Category: Tech

Ghost in the Cloud

Meghan O’Gieblyn for n+1

Meghan O’Gieblyn writes for n+1 on the relationship between transhumanism and religion.

“I DO PLAN TO BRING BACK MY FATHER,” Ray Kurzweil says. He is standing in the anemic light of a storage unit, his frame dwarfed by towers of cardboard boxes and oblong plastic bins. He wears tinted eyeglasses. He is in his early sixties, but something about the light or his posture, his paunch protruding over his beltline, makes him seem older. Kurzweil is now a director of engineering at Google, but this documentary was filmed in 2009, back when it was still possible to regard him as a lone visionary with eccentric ideas about the future. The boxes in the storage unit contain the remnants of his father’s life: photographs, letters, newspaper clippings, and financial documents. For decades, he has been compiling these artifacts and storing them in this sepulcher he maintains near his house in Newton, Massachusetts. He takes out a notebook filled with his father’s handwriting and shows it to the camera. His father passed away in 1970, but Kurzweil believes that, one day, artificial intelligence will be able to use the memorabilia, along with DNA samples, to resurrect him. “People do live on in our memories, and in the creative works they leave behind,” he muses, “so we can gather up all those vibrations and bring them back, I believe.”

Technology, Kurzweil has conceded, is still a long way from bringing back the dead. His only hope of seeing his father resurrected is to live to see the Singularity: the moment when computing power reaches an “intelligence explosion.” At this point, according to transhumanists such as Kurzweil, people who are merged with this technology will undergo a radical transformation. They will become posthuman: immortal, limitless, changed beyond recognition. Kurzweil predicts this will happen by the year 2045. Unlike his father, he, along with those of us who are lucky enough to survive into the middle of this century, will achieve immortality without ever tasting death.

But perhaps the Apostle Paul put it more poetically: “We shall not all sleep, but we shall all be changed.”

(…)

I, Cyborg

Jennifer Gersten writing for Guernica

Jennifer Gersten Skypes Neil Harbisson and Moon Ribas, the world’s ‘first cyborgs’, for Guernica magazine:

My Skype call with the cyborgs drops for the second time. They’re traveling, they explain, and the Internet is bad. The app gurgles, failing to connect us. I end up addressing my questions to their account’s profile picture, an image of the Earth. In a corner of the screen, a small rectangle reflecting my upper body floats like a minor planet.

The account belongs to Neil Harbisson, who is one half of the cyborg duo. His username is “Neil Harbisson’s Head,” which is fitting, as he’s connected to Skype through the thin black antenna that he had surgically attached to his skull about thirteen years ago. Our technical difficulties persist and we never do get to see each other, a circumstance I’m left trying to reconcile with the knowledge that for many, what he and his artistic partner Moon Ribas are doing represents the cutting edge of human ingenuity. Millions have watched the TED talks in which Harbisson and Ribas explain how their cyborg bodies came to be and the art their extended senses allow them to create. Harbisson, thirty-four, was born with achromatopsia, a type of colorblindness that limits his vision to black and white. Curious about what seeing color would be like, he developed an antenna that gives him a kind of synesthesia, allowing him to hear color waves translated as sonic signals; the first colors he heard belonged to a Windows logo on a nearby device. “It was really magical,” he recalls. Ribas, thirty-one, hoping to deepen her connection to nature, had a chip implanted in her elbow that sends tremors down her arm whenever earthquakes occur.

They haven’t seen Westworld, the western android TV thriller based on the Michael Crichton film of the same name that debuted in the fall of 2016. They’ve heard of Donna Haraway’s “A Cyborg Manifesto,” considered a founding text in cyborg theory, but say they haven’t read it. They’ve never cared for science fiction. While growing up together in Catalonia, they were interested in animals and the natural world. Technology, by contrast, was cold and distant. They spent most of their time in the woods. “My aim was not to become a cyborg,” Harbisson says. “It was to sense color.” Being able to hear colors, including ultraviolet and infrared, which are invisible to the human eye, strengthened his conviction that “human” failed to describe his new self. Eventually, he felt no difference between where the technology ended and his human body began. “The only word that really described this is ‘cyborg,’” he says.

Their surgeries took place in secret—their doctors feared losing their licenses over the probable media backlash—and were not without risk. “We were never really scared,” Ribas says of the process. “It was exciting, it was always an experiment. When you are so curious about something, everything else doesn’t really matter.” At first, their brains rebelled. Tremors from larger earthquakes woke Ribas up at night before she grew accustomed to the sensations. “Now I feel like I have two heartbeats: my own, and the earth, beating at its own rhythm,” she says. During his initial months with the antenna, Harbisson suffered from headaches and was often exhausted. “It was an overload,” he remembers. “I was hearing color everywhere. It wasn’t a good start. But after five months, my brain got used to it.” Though his mother disapproved, she eventually came around.

The press has salivated over their apparent novelty, Harbisson’s in particular. The BBC described him as “the first legally recognized cyborg,” as in 2004 he was permitted to pose for his United Kingdom passport photo with his antenna intact. Sometimes this title is shorthanded: a Google search for “the first cyborg” yields Harbisson’s name in the first few results.

Assertions of cyborg primacy are precarious. Over the years, news outlets have named various cyborgs “first.” In the running, too, is Steve Mann, a Canadian inventor who is considered the “father of wearable computing.” Decades before Harbisson, Mann negotiated for and won the right to fly with his implant, a self-designed computer vision device which he calls the EyeTap. According to Motherboard, however, the first cyborg was Kevin Warwick, a British engineer and professor who in 1998 had an RFID transmitter implanted in his arm that allowed him to control lamps and other nearby devices via the Internet. If you ask Discovery, the first cyborg was a man named Johnny Ray, a Vietnam veteran who, after a stroke stripped him of the ability to speak, lived for a time with an electrode implant in his brain that let him relay messages with his thoughts. Or perhaps the first cyborg was a rat.

(…)

The Great AI Awakening

Gideon Lewis-Kraus for the New York Times Magazine

Gideon Lewis-Kraus provides an in-depth exploration of Google’s innovative use of AI in translation today, and in the future of tech tomorrow:

Prologue: You Are What You Have Read

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”

The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.

NO. 1:

Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

NO. 2:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.

Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

(…)

Humanity’s greatest fear is about being irrelevant

Ian Tucker in conversation with Genevieve Bell for the Guardian

Writing for the Guardian, Ian Tucker talks to Genevieve Bell, the Australian anthropologist working at Intel’s headquarters in Oregon, to explore our anxieties about rapidly developing technology and artificial intelligence:

Genevieve Bell is an Australian anthropologist who has been working at tech company Intel for 18 years, where she is currently head of sensing and insights. She has given numerous TED talks and in 2012 was inducted into the Women in Technology hall of fame. Between 2008 and 2010, she was also South Australia’s thinker in residence.

Why does a company such as Intel need an anthropologist?
That is a question I’ve spent 18 years asking myself. It’s not a contradiction in terms, but it is a puzzle. When they hired me, I think they understood something that not everyone in the tech industry understood, which was that technology was about to undergo a rapid transformation. Computers went from being on an office desk spewing out Excel to inhabiting our homes and lives and we needed to have a point of view about what that was going to look like. It was incredibly important to understand the human questions: such as, what on earth are people going to do with that computational power. If we could anticipate just a little bit, that would give us a business edge and the ability to make better technical decisions. But as an anthropologist that’s a weird place to be. We tend to be rooted in the present – what are people doing now and why? – rather than long-term strategic stuff.

A criticism that is often made of tech companies is that they are dominated by a narrow demographic of white, male engineers and as a result the code and hardware they produce have a narrow set of values built into them. Do you see your team as a counterbalance to that culture?
Absolutely. I suspect people must think I’m a monumental pain. I used to think my job was to bring as many other human experiences into the building as possible. Being a woman, being Australian and not being an engineer – those were all valuable assets because they gave me a very different point of view.

We are building the engines, so the question is not will AI rise up and kill us, but will we give it the tools to do so?

Now, the leadership of Intel is around 25% female, which is about what market availability is in the tech sector. We are conscious of what it means to have a company whose workforce doesn’t reflect the general population. Repeated studies show that the more diverse your teams are, the richer the outcomes. You have to tolerate a bit of static, but that’s preferable to the self-perpetuating bubble where everyone agrees with you.

You are often described as a futurologist. A lot of people are worried about the future. Are they right to be concerned?
That technology is accompanied by anxiety is not a new thing. We have anxieties about certain types of technology and there are reasons for that. We’re coming up to the 200th anniversary of Mary Shelley’s Frankenstein and the images in it have persisted.

Shelley’s story worked because it tapped into a set of cultural anxieties. The Frankenstein anxiety is not the reason we worried about the motor car or electricity, but if you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

(…)

Fitzcarraldo Editions