
Critique: Canada’s AI Code of Conduct is in the Building …

Last updated on October 6th, 2023 at 09:38 pm

… but has the shark already escaped?

Yesterday, the Canadian government issued an update to AIDA alongside a new AI code of conduct. A statement from the same event where Industry Minister Champagne announced the new conduct code said it was time to move from fear to opportunity. Not to be alarmist, but didn't we hear something similar from the mayor in Jaws?

And we knew a lot more about great whites then than we know about AI now. The fear around AI is real and reasonable.

How Do We Control What We Don’t Understand?

When my dad took my mom on a date in the 60s, the car broke down. Fan belt was busted. My mom took off her nylons and my dad fixed the engine and they were back on the road. 

The days when self-repair was still possible for anything more complex than a can opener are long behind us, but we are about to enter uncharted territory. Our own technology is about to create something that will render the vast time and resources we have devoted to software and internet innovation a footnote: something beyond our own technical capability to understand and to control. When code is produced better and faster, and evolved by AI, a near-term reality, we will have created something for which we can no longer go under the hood. Not only will we lack the ability to self-repair, we may no longer even have access. The hood may be locked by a tireless 24×7 coder that doesn't need sleep, food, or a life and is continuously devoting itself to improvement. It may not be actual intelligence, but its ability to understand and influence the world, based on what we taught it, is about to outstrip ours.

And in all likelihood, our course is already set. The momentum is too high, the stakes are too big, and the ambitions are too ambitious for an industry built on ego to put that ego aside, focus on the needs of humanity, and cooperate on regulation. Because if anything was ever in need of regulation, it is this.

We really do not understand the magnitude of what we are playing with or how arbitrary it can be. A throwaway question typed into ChatGPT in 2023 could have devastating consequences in 40 years. (You thought being socially canceled was bad? Wait until you're canceled by an AI because you don't fit the algo.) Thankfully, the major players must invest mind-boggling sums of money to stay relevant at this stage, so it's a smaller group at the table who have the influence and who need to agree.

Because having countries agree on codes of conduct for AI is functionally useless right now. Do you think Sam Altman made a mental note to read our new code of conduct this morning? I can almost guarantee the answer is no. 

And we’re in a bit of a tight spot, even with these perfunctory waves toward regulatory models. Tech robber barons have become so freakishly powerful that they ignore, spite, or bulldoze through anything regulatory. Or they just get up and walk out. Meta pulled Canadian news out of spite, and because it could. This was the right call, but it has resulted in a terrible combination of no one noticing and the quality of Canadian news and information observably dropping like a rock. Are we comfortable with this overt challenge to journalism’s role in our democracy? Is anyone confident companies will bend to pressure from presidents and prime ministers on this? Do we really understand the consequences of these decisions? We don’t, and we can’t.

We’re in completely uncharted territory. Everyone is afraid of AI, and not just because we’ve seen too many Terminator sequels. It’s not malice or efficiency we need to be afraid of (yet). It’s arbitrariness, randomness, unforeseen consequences, the penicillin on the bread. It’s bad data policy, it’s bias, it’s losing control.

And like most codes of conduct, it’s toothless, does its utmost to be inoffensive to industry, and exhibits very little understanding of what the issues we face right now actually are, and how they are influenced and changed. “The signatories—which include Canadian AI firms Cohere, Coveo, and Ada—pledge to assess the potential negative impacts of these systems.” Can’t wait to see that assessment. 

It’s because, for the first time, a technology we made is outside our control.  

What if, today, an AI wrote, deployed, and hid a huge amount of code? Protected or encrypted it so humans couldn’t see it. No explanation, no reason. Just hid it. What recourse would we have? Regardless of what it is or what it is for … we are very nearly at, if not already at, the point where that could happen, and there is nothing we can do about it. Don’t say it couldn’t happen. No one saw hallucinations coming. And we will never be able to map all possible scenarios.

Of course, that is a scary scenario in a big way. But what about the small ones? Machine learning algorithms are banning Twitter accounts and denying insurance claims RIGHT NOW. What about the girl who talks with her best friend, SnapchatAI, confides her pharmaceutical regimen, and forty years later is denied a teacher’s certificate by another algorithm because of it?

These are real scenarios, and none of the codes of conduct even being considered covers them adequately.

Most unfortunately, AI could not have arrived at a more polarized moment, when extreme cooperation is called for. 

Codes of Conduct, Everywhere

Canada’s is not the first code of conduct for AI development: the US has one in place with the G7, the EU has its own, and a joint initiative between Canada, the US, and the EU is also underway. It’s a little confusing. But the lack of urgency, the clear misinterpretation of the stakes, and the inevitable free-for-all that will develop are consistent. For all too many people this is just another evolution. It is not. It is THE evolution. Given our current political and economic environment, stopping AI, or even meaningfully changing its path, is beyond the influence of democratic governments.

Because people are fuelling it and won’t stop, and to think they will stop voluntarily is absurd.

The fear is reasonable, but it is also somewhat pointless. Not only can we not control what is to come, our ability to influence it is dwindling. Nor can we remotely predict it anymore. We have absolutely no idea what is going to happen, and a daily decreasing amount of control. We’re in a brand new era. You can feel it. We’re looking at the end of tech as we know it, unless everything stops dead now, and that will simply never happen. You can either learn to capitalize on the chaos of revolution or hang on for dear life.

Meanwhile, the chips will continue to pour out of NVIDIA, the LLMs will continue to gorge themselves on data and power, the algorithms will get better and better, and if anyone says they have any idea where this will all end up, they are not a visionary. They are a fool. 

Ride the waves, but exercise extreme caution. The ocean is alive.

Jennifer Evans
http://www.b2bnn.com
principal, @patternpulseai. author, THE CEO GUIDE TO INDUSTRY AI. former chair @technationCA, founder @b2bnewsnetwork #basicincome activist. Machine learning since 2009.