The similarities and contrasts between this book and AI Snake Oil are striking. For example, AI Snake Oil describes generative AI as something that largely works but is sometimes wrong, whereas this book is very concerned about how these systems were rushed out the door in the wake of the unexpected popularity of ChatGPT, despite clear issues with hallucinations and unacceptable content generation.
Yet the books agree on many things too: the widespread use of creators’ content without permission, the weaponization of generative AI for political misinformation, the dangers of deep fakes, and the lack of any form of factual verification (or understanding of the world at all) in the statistical approaches used to generate the content. Big tech has no answer for these “negative externalities” it is enabling and would really rather we all pretend they’re not a thing. This book pushes much harder on how unregulated big tech is, and how it is repeatedly allowed to cause harm to society in return for profit. It will be interesting to see whether any regulation with teeth is created in this space.
I find the assertion made in this book that large language models should not be open sourced because then “anyone can use them” particularly weird, especially the statement that Facebook was wrong to release code it developed without consulting the wider world; that’s absolutely not how copyright law works, let alone computer security. The reality is that once a given technology exists, it is almost certain that bad actors will acquire it; we completely failed to stop them acquiring nuclear weapons despite huge amounts of effort, for example. This is especially true when even the book agrees that big tech doesn’t appear to have much of a moral compass on what these systems should be allowed to do (except make money for their creators, of course). Additionally, I would have thought that by now there wouldn’t be anyone left arguing in favour of security through obscurity, but here we are.
I happened to be reading this book just as DeepSeek R1 was released as an open source model that is massively cheaper to train than the existing generative models. It mitigates many of the environmental hazards of generative AI, has caused an equally massive drop in NVIDIA’s stock price (since global demand for GPUs is likely to be commensurately smaller), and further demonstrates that open development is in fact how the world wants to work.
This book was very readable, but felt less well researched than AI Snake Oil. Honestly, I preferred the other book to this one.
Computers
MIT Press
September 17, 2024
247

How Big Tech is taking advantage of us, how AI is making it worse, and how we can create a thriving, AI-positive world. On balance, will AI help humanity or harm it?