Co-Intelligence: Living and Working with AI by Ethan Mollick was published in April 2024. I read it last summer and was encouraged by its balance of playfulness, thoughtful analysis, and clarifying frameworks for both the opportunities and the legitimate concerns of working with artificial intelligence. What follows is a summary of what you will find in the text, along with a reading list to grow your bookshelf as generative AI infiltrates our devices, our work, and our lives.
Image 1. Cover photo of the book, Co-Intelligence: Living and Working with AI. Source: Amazon.com
Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, Associate Professor of Management, and Co-Director of Generative AI Labs at The Wharton School of the University of Pennsylvania. This matters in two respects: he knows AI well and works with it from a business perspective, and he chose to write a book for the public, not for academics. It is free of jargon, and while it reveals the deep understanding that comes from a career of research, it is written more from a perspective of self-discovery and lived experience with artificial intelligence in its earlier forms.
I appreciate the moments in the book when Mollick clarifies how the technology functions: how it generates its output and why this model of artificial intelligence, the large language model (LLM), often produces biased or flawed results. It is always making a well-trained guess, drawing on its core training data and a statistical model of language to build correlations from the words (the prompt) it is given.
“AI can have some unexpected weaknesses. For one thing, they don’t actually ‘know’ anything. They are simply predicting the next word in a sequence, they can’t tell what is true or what is not. It can help to think of the AI as trying to optimize many functions when it answers you, one of the most important of which is ‘make you happy.’”
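To make that "predicting the next word" idea concrete, here is a minimal, hypothetical sketch of next-word prediction. The prompts, candidate words, and probabilities are invented for illustration; a real LLM learns its probabilities from enormous amounts of training data and operates over tokens rather than whole words.

```python
import random

# Hypothetical, hand-made probability table for illustration only.
# A real LLM learns these weights from its training data.
next_word_probs = {
    "The capital of France is": {"Paris": 0.90, "Lyon": 0.06, "beautiful": 0.04},
    "The moon is made of": {"rock": 0.55, "cheese": 0.30, "dust": 0.15},
}

def predict_next_word(prompt: str) -> str:
    """Pick a next word in proportion to its probability; no notion of truth is involved."""
    candidates = next_word_probs[prompt]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("The capital of France is"))  # usually "Paris"
print(predict_next_word("The moon is made of"))       # sometimes "cheese": plausible, not true
```

The point of the sketch is Mollick's point: the system is choosing likely words, not checking facts, which is why the human in the loop (Principle 2, below) still matters.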
One of the more significant frameworks posited in the book is that large language models are general-purpose technologies, similar to the creation of the World Wide Web (HTML) or the invention of the Gutenberg printing press. It is interesting that the acronym GPT (general-purpose technology) is the same as that of the core technology within generative systems, the Generative Pre-trained Transformer — but don’t confuse the two! The general-purpose infusion of AI into our lives is happening daily on our mobile devices (AI assistants) and on the web, with AI summaries of traditional searches, AI-conducted searches on platforms like PerplexityAI, and shopping aids like Google image capture (grab an image and find where you can buy that item).
Part I
Central to Part I of the text are Mollick’s four core principles, detailed below. I have embraced them in my work and encourage you to do the same, within the bounds of your own ethical framework and your level of comfort with the well-documented concerns regarding LLMs: copyright infringement in the training data, biases in the data and the results, the environmental cost of using the tools, and, of course, data privacy and security.
PRINCIPLE 1: Always Invite AI to the Table
“You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving.” The key is to better understand how it can help you, and how you may need to adapt your work if it seems to threaten your job.
PRINCIPLE 2: Be the Human in the Loop
“The concept of ‘human in the loop’ has its roots in the early days of computing and automation. It refers to the importance of incorporating human judgement and expertise in the operation of complex systems (the automated loop). Today the term describes how AIs are trained in ways that incorporate human judgement.”
LLMs are guessing based on training data, essentially making them prediction machines that can sound very plausible. But they currently will make up “plausible but wrong citations and facts.” The human in the loop needs to remain skeptical of whatever the AI produces. Chat-based AI is designed to provide answers that fit the narrative you place it in.
PRINCIPLE 3: Treat AI Like a Person (but tell it what kind of person it is).
Anthropomorphism is ascribing human characteristics to something nonhuman. An LLM feels like it is chatting to you and with you, and it often feels like a person. Some AI models, like Pi, are designed to have a particular personality, but AI doesn’t have emotions, a sense of self, or physical sensations. While we, as humans, are inclined to anthropomorphize, this inclination can become dangerous if we begin ascribing feelings, intentions, or levels of knowledge to an LLM. In fact, “a lot of researchers are deeply concerned about the implications of casually acting as if AI is a human, both ethically and epistemologically. As researchers Gary Marcus and Sasha Luccioni warn, ‘The more false agency people ascribe to them, the more they can be exploited.’”
Mollick suggests that you think of AI as your collaborator. Perhaps a fast intern. But to do this, you have to tell the AI who it is: “Answer the following as though you are a marketing intern supporting social media…” Then give it a prompt to do something you would normally ask an intern to do, as in the sketch below.
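In practice, this "tell it who it is" advice maps onto the system prompt that most chat APIs expose. The sketch below assumes the OpenAI Python SDK and an example model name; Mollick doesn't prescribe any particular vendor or tool, so treat the specifics as illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whichever model you have access to
    messages=[
        # Principle 3: tell the AI what kind of person it is.
        {"role": "system", "content": "You are a marketing intern supporting social media."},
        # Then give it a task you would normally hand to an intern.
        {"role": "user", "content": "Draft three short posts announcing our spring newsletter."},
    ],
)

print(response.choices[0].message.content)
```

The same framing works in an ordinary chat window: state the persona first, then state the task.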
PRINCIPLE 4: Assume this is the worst AI you will ever use.
This is why I recommend building an AI bookshelf. As AI continues to evolve, you have to keep acquainting yourself with the new ideas and frameworks, opportunities and risks. There are also some amazing podcasts that do the same. My favorite, the Marketing AI podcast with Paul Roetzer and Mike Kaput, shares weekly updates on all the news.
Part II
In Part II, Mollick homes in on particular approaches to using AI in your work and life: AI as a person, a creative, a coworker, a tutor, a coach, and as our future. The book ends with an epilogue, “AI as Us.”
The chapter “AI as a Person” helps the reader understand the critical difference between AI and our usual understanding of a computer program or software product. It does not behave like other software. In fact, as Mollick points out, “AI is terrible at behaving like traditional software. Traditional software is predictable, reliable, and follows a strict set of rules. AI, on the other hand, is anything but predictable and reliable. It can surprise us with novel solutions, forget its own abilities, and hallucinate incorrect answers.” It will sometimes refuse to perform an action. AI excels at writing, analyzing, coding, and chatting. It can fail at arithmetic and struggles to repeat processes consistently. AI systems “make mistakes, tell lies, and hallucinate answers, just like humans.”
Mollick points out — and having experimented, I heartily agree — that since each system has its own strengths and weaknesses, you need to play with multiple systems to find the ones that perform most reliably on the tasks you need done. Furthermore, you should approach your AI in a conversational mode, following Principle 3 by always letting the AI know who it should be in the conversation.
Chapter 5 focuses on AI’s creativity, which, as Mollick notes, is something it is really good at. It is bad at repetition but excellent at writing and at making art, music, and video. These creative tasks, not the automated, non-generative AI tools, are what make the technology exciting and fun. They are also where the greatest copyright issues abound and, for educators, where the question of authenticity arises.
Chapter 6 focuses on AI as a coworker. AI is really good at doing a lot of things that we do and really bad at doing a lot of things that we do. The problem is that we still don’t know where the lines are. Mollick calls this the Jagged Frontier: it simply isn’t clear which tasks, or which levels of a task, AI can do well yet.
Mollick suggests having “just me tasks” and “delegated tasks,” which, as with an intern, you assign to the AI but check its work. These delegated tasks can be very complex; it simply takes intentional prompting. Then there are “automated tasks,” where you let the AI do the work and don’t check it. Many of us have Google Alerts or email rules that work in this fashion.
Chapter 7 offers the idea that AI can be a tutor, and chapter 8 expands this idea to AI as a coach. This is something that, as an educator, I see happening with my students. If they do not understand a concept, they ask AI. They often ask AI to do much more, and there is no way to prove that they are turning in something created by AI. However, an AI-written essay will likely not quite meet the mark for your assignment (I joke that AI frequently earns a B), and it will have some tell-tale signifiers, but there is no way to detect who wrote what. Ultimately, all of us need to think of AI the way accountants and mathematicians thought of the calculator at its introduction: it is a tool that can be used well or abused.
Teaching employees or students how to use AI effectively as a tutor can be incredibly useful (consider the future savings on training). It can also be helpful for side gigs or hobbies. AI as a tutor or coach can play many roles.
Chapter 9 focuses on our future. Mollick notes how it feels as though we are living through a moment of science fiction. I often say it is an uncomfortable Black Mirror episode, hopefully one where we get to choose the ending. Mollick offers several scenarios: a) AI hits a plateau, and this is basically as good as it gets; b) AI grows slowly; c) AI grows exponentially; and d) AI becomes the Machine God that various movies and novels portend.
Recommended Reading
One of the lovely aspects of this book is the way Mollick shares how he played with AI in the act of writing it, quite literally demonstrating its value with his prompts and their results. It makes for a fun read. In honor of that process, and because keeping up with AI is a critical skill in today’s world, I have curated a collection of recommended books below. I also wanted to ask Claude.ai what it would recommend. My prompt was “If you were a librarian and someone asked you for recommendations for books about Generative AI and work, what would you recommend?”
Image 2. A screenshot of a chat with Claude.ai requesting a list of GenAI book recommendations.
Claude-Recommended Reading
Technical and Strategic Insights
The AI Revolution in Work by Erik Brynjolfsson and Andrew McAfee
A deep dive into how AI is transforming workplace dynamics
Explores the potential of generative AI to reshape productivity and job roles
Provides strategic insights for businesses and workers
AI Superpowers by Kai-Fu Lee
While not exclusively about generative AI, offers crucial context on AI's broader impact on work
Explores the economic and workforce implications of advanced AI technologies
Provides a global perspective on AI's potential
Practical Guides
Working with AI: Real Stories of Human-Machine Collaboration by Thomas H. Davenport and Steven M. Miller
Focuses on practical applications of AI in various industries
Includes case studies of successful human-AI collaboration
Helps readers understand how to effectively integrate AI into work processes
The Age of AI: And Our Human Future by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
Examines the broader philosophical and practical implications of AI in work and society
Provides a nuanced look at how generative AI is changing human capabilities
Skill Development and Adaptation
Artificial Intelligence and the Future of Work by Thomas Kochan
Analyzes how AI is transforming job markets
Offers strategies for workers to adapt and thrive in an AI-driven workplace
Provides insights into skill development and career planning
Ethical and Social Considerations
Generative AI and the Future of Work by Paul Scharre
Explores the ethical implications of AI in the workplace
Discusses potential societal impacts and challenges
Provides a balanced view of AI's potential benefits and risks
Technical Deep Dive
AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan
A unique blend of technical insights and speculative fiction
Explores potential scenarios of AI integration in various work contexts
Provides both technical and imaginative perspectives on AI's future
Recommended Approach
Start with broader overviews like Brynjolfsson and McAfee's work
Progress to more specific guides like Davenport and Miller's book
Supplement with forward-looking perspectives from Kai-Fu Lee
Consider the ethical implications discussed in Scharre's work
Note: The field of generative AI is rapidly evolving, so complement these books with current articles, research papers, and industry reports to stay up-to-date.
Human-Recommended Reading
Books I would recommend adding to your bookshelf include:
Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing) by Salman Khan (founder of Khan Academy)
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
Solomon's Code: Humanity in a World of Thinking Machines by Olaf Groth and Mark Nitzberg
Superagency: What Could Possibly Go Right with Our AI Future by Reid Hoffman and Greg Beato
The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future by Orly Lobel
ChatGPT, AI, and the Future of Writing edited by Elizabeth Avery Hurt