March 18, 2023

Thinking about AI

Tim Bray has a good, balanced post about AI and large language models (LLMs). He seems a bit more neutral on the subject than I am, but I agree with what he says.

Elsewhere, I've been disappointed to see a bunch of people whose opinions I respect getting excited and cheerleading for it uncritically. They're (I guess almost by definition, given they're folk who've done well in tech) all comfortable, (mostly) white men. They're not the techbros; they're people I'd expect to have better critical faculties and to do a better job of tempering their enthusiasm with the potential risks. They've lost some of my respect as a result. (Just for clarity, Tim Bray isn't in this group.)

Their interest shows that, unlike blockchain, there's some genuine utility in the technology. If anything, that makes me more, rather than less, nervous about it. I don't have a clear view of my concerns yet, and maybe they'll be mitigated, but I think it's useful to share them and try to work out my thinking in public.

Who owns the tech? The tech world has benefitted massively over the past decade or two from open source greatly reducing the barriers to innovation and development. At present the AI tech is locked behind APIs and privately owned. There do seem to be moves towards opening that up, though, so this is the concern I'm least worried about. Opening up access will, however, risk removing what few guard-rails are in place, which will no doubt make my next concern even worse.

Companies will use the tech to drive down costs, which will drive down the acceptable quality floor. Think about how annoying interactions with call centres are at present; they're about to get more annoying and soul-destroying. Look at how search results are already pages of poorly-researched content-farm output rather than the expert's blog post they're all based on; that blog post will be further buried under AI-generated sites confidently telling you things you'll have to hope are correct.

And that leads to my biggest concern. We've already got a huge problem with the output of computers and algorithms being taken as some sort of objective truth. The Post Office scandal shows how much suffering this can cause, even driving people to suicide. Over-confident AI will be deployed by people who don't understand it, or who don't care about the risks because (they hope) they won't be affected by the downsides.

Posted by Adrian at March 18, 2023 10:49 AM
