Over the past couple of decades, AI tools of varying degrees of complexity and proficiency have become part of our daily lives…in communication, banking, healthcare, shopping, navigation, travel and more. In many ways, they’ve made our lives easier and simpler, whether it is ordering groceries online, making digital payments, or finding our way around an unfamiliar city. Most of us can’t remember what it was like to live without a smartphone. And with the advent of smart homes, self-driving cars and advanced robotics, AI is here to stay. So far, we have mostly agreed that it’s a good thing.
But something shifted in late 2022, with the release of ChatGPT, which brought “generative AI” into the mainstream. Unlike traditional AI, which focuses on analysing and understanding existing data, generative AI learns from vast datasets and creates something entirely new from them – text, images, music, videos, and even 3D models, often with human-level proficiency. This has led to the rise of AI influencers on Instagram and AI-generated images of famous people doing and saying things they never actually did, fuelling misinformation, and raising fears that actors, artists, musicians, and writers might find themselves competing against, and maybe even being replaced by, AI.
The technology is improving at an exponential rate, and we have to ask ourselves what else it can do. How much better can it get? And more importantly, can we control it? The present discourse on AI, and the panic among some, is driven by concerns that run the gamut from fears of mass replacement of humans in most professions, to a darker vision of robots going rogue, like HAL from “2001: A Space Odyssey”, or the Skynet scenario from the Terminator movies, of machines simply taking over.
On the other hand, there are experts who hold the view that while AI can impersonate a human – taking all of the ideas and language that humans have produced and stringing them into a word cocktail that makes sense and seems human – it still lacks consciousness, emotions, ethics, and agency, and is, at the end of the day, nothing more than a tool.
It’s a confusing time, and we all need to be better informed about the future that is coming at us faster than we realise. As it happens, there are several thoughtful and well-researched books, written by industry insiders, journalists, technologists, and academics, that lay out the issues clearly and explain what is actually happening. Since this is an area of interest for us, we have quite a few of these books at Luna, and we hope to keep adding the work of the best minds on this subject. Here’s a list of six books that we recommend.
A New Dark Age: Technology and the End of the Future by James Bridle: Bridle, an artist and technologist, highlights how the exponential growth of information, coupled with the opacity of algorithms and the prevalence of misinformation, has created a sense of cognitive overload and disorientation. He challenges the notion of infinite progress through technology, emphasizing the inherent unpredictability of complex systems.
Code Dependent, by Madhumita Murgia: Murgia explores the concept of “data colonialism,” highlighting how the vast data generated by users in the developing world often benefits powerful tech companies in the West, with little returned to those communities. There are, to be sure, areas in which AI is an important tool, but it has its limitations. This book humanizes the impact of AI through deeply affecting personal stories of people affected by algorithmic bias, surveillance, and the erosion of privacy. Murgia calls for greater public engagement in discussions about the future of AI, and for regulation to protect individuals and communities from potential harm.
Moral AI: Can Artificial Intelligence Be Taught to Be Good?, by Jana Schaich Borg, Walter Sinnott-Armstrong and Vincent Conitzer: This book delves into the complex and pressing question of whether AI can be instilled with a sense of morality. The authors explore multiple approaches to teaching AI systems moral principles, including reinforcement learning, value alignment, and incorporating ethical guidelines into their design. They examine the real-world implications in various domains, such as self-driving cars, healthcare, and warfare, raising questions about responsibility and accountability.
Co-Intelligence, by Ethan Mollick: Wharton professor Ethan Mollick has become one of the most prominent explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world. In Co-Intelligence, Mollick urges us to engage with AI as a co-worker, co-teacher, and coach. He assesses its profound impact on business and education, using dozens of real-time examples of AI in action.
The Coming Wave, by Mustafa Suleyman: Mustafa Suleyman, the CEO of Microsoft AI and co-founder of DeepMind, shows how unchecked and unregulated AI can threaten the nation state, and how we can successfully contain these powerful technologies to avoid catastrophic or dystopian outcomes. Amidst unprecedented peril and extraordinary promise, this is a stark warning but also a hopeful guide to where society goes next.
The Worlds I See, by Dr Fei-Fei Li: In addition to being a compelling memoir covering Dr Li’s formative years and challenges (she came to the U.S. from China at a young age), this book offers her insights into the evolution of AI, including her pivotal role in creating ImageNet, a massive dataset that significantly advanced the field. It offers a clear and accessible explanation of AI’s history and potential through the lens of Dr Li’s personal experiences, while emphasising the need for ethical considerations and diverse perspectives in the field.