For most of my life, I've been disappointed in robots. Movies always depicted them as walking, talking, humanoid, smart and cool. But for decades, real robots have been little more than assembly-line arms at car factories.

In the past three years, though, something has shifted. Self-driving cars have logged nearly two million miles on public roads. Drones have gotten smart enough to avoid hitting things. And two-legged, walking robots are suddenly real. Now luminaries, including Bill Gates, Stephen Hawking and Elon Musk, are speaking out about the dangers of our increasingly smart machines. "Full artificial intelligence could spell the end of the human race," Hawking has told the BBC.

It's one thing for an easily spooked public to mistrust artificial intelligence. But Gates, Hawking and Musk? As it turns out, all three were responding to an initiative by Massachusetts Institute of Technology professor Max Tegmark. In 2014 he co-founded the Future of Life Institute, whose purpose is to consider the dark side of artificial intelligence. "When we invented less powerful technology, like fire," Tegmark told me, "we screwed up a bunch of times, then we invented the fire extinguisher. But with more powerful technologies like human-level artificial intelligence, we want to get things right the first time."

The worry is that once AI gets smart enough, it will be able to improve its own software, over and over again, every hour or minute. It will quickly become so much smarter than humans that, well, we don't actually know. "It could be wonderful, or it could be pretty bad," Tegmark says.

In many of Isaac Asimov's futuristic tales, humans programmed robots with the Three Laws of Robotics. For example: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Wouldn't that kind of software safeguard work? "The funny thing about Asimov's novels," Tegmark says, "is almost all of the Three Laws stories are about how something goes wrong with them." Programming machines to obey us precisely can backfire in unexpected ways. "If you tell your super-AI car to get to the airport as fast as possible, it'll get you there, but you'll arrive chased by helicopters and covered in vomit." Not exactly as intended.

But there are bigger dangers. In July, Tegmark's group released an open letter expressing alarm over the rising threat of autonomous weapons, a terrorist's dream. (Hawking, Musk and Apple co-founder Steve Wozniak were among the letter's 2,500 co-signers.) The United Nations is discussing a ban on AI weapons.

On a more day-to-day scale, robots will likely take even more of our jobs. The first to go, of course, will be the ones that are the most repetitive or the most easily automated, such as store clerks, tax preparers and paralegals. (Some Japanese banks already employ robots to assist customers.) "If you teach kindergarten or you're a massage therapist, you'll get to keep your job a lot longer," Tegmark says. He imagines that, finances aside, the loss of jobs will also mean a loss of human fulfillment. "Today so much of our sense of purpose comes from our jobs. We should think hard about the sort of jobs that we would like to keep doing and getting our identity from. Education? The arts, culture, service jobs? Or what, exactly?"

Such alarm bells prompted Musk (co-founder of Tesla Motors and founder of SpaceX) to donate $10 million to the Future of Life Institute (and serve, with Hawking and others, as a scientific adviser for the cause). The group has so far received hundreds of research-grant proposals, funded dozens of them and held major meetings on the topic.