Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans, one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang’s previous essay, “ChatGPT Is a Blurry JPEG of the Web,” is also worth reading.

Tags: artificial intelligence, essays, risks

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI) and large language models (LLMs) like ChatGPT and GPT-4, one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about, in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; it could even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

The problem isn’t the technology, which is advancing faster than even the experts had guessed; it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit.