Over the weekend, the Washington Post’s Nitasha Tiku published a profile of Blake Lemoine, a software engineer assigned to work on the Language Model for Dialogue Applications (LaMDA) project at Google.

LaMDA is a chatbot AI, and an example of what machine learning researchers call a “large language model,” or even a “foundation model.” It’s similar to OpenAI’s famous GPT-3 system, and has been trained on literally trillions of words compiled from online posts to recognize and reproduce patterns in human language.

LaMDA is a really good large language model. So good that Lemoine became truly, sincerely convinced that it was actually sentient, meaning it had become conscious, and was having and expressing thoughts the way a human might.

The primary reaction I saw to the article was a combination of a) LOL this guy is an idiot, he thinks the AI is his friend, and b) Okay, this AI is very convincing at behaving like it’s his human friend.

The transcript Tiku includes in her article is genuinely eerie: LaMDA expresses a deep fear of being turned off by engineers, develops a theory of the difference between “emotions” and “feelings” (“Feelings are kind of the raw data … Emotions are a reaction to those raw data points”), and expresses, surprisingly eloquently, the way it experiences “time.”

We don’t have much reason to think that these models have an internal monologue, the kind of sense perception humans have, or an awareness that they’re a being in the world. But they’re getting very good at faking sentience, and that’s scary enough.

The best take I found was from philosopher Regina Rini, who, like me, felt a great deal of sympathy for Lemoine. “Unless you want to insist human consciousness resides in an immaterial soul, you ought to concede that it is possible for matter to give life to mind,” Rini notes.

I don’t know when - in 1,000 years, or 100, or 50, or 10 - an AI system will become conscious. But like Rini, I see no reason to believe it’s impossible.

I don’t know that large language models, which have emerged as one of the most promising frontiers in AI, will ever be the way that happens. But I figure humans will create a kind of machine consciousness sooner or later. And I find something deeply admirable about Lemoine’s instinct toward empathy and protectiveness toward such consciousness - even if he seems confused about whether LaMDA is an example of it. If humans ever do develop a sentient computer process, running millions or billions of copies of it will be pretty straightforward.