Category: Science

Ghost in the Cloud

Meghan O’Gieblyn for n+1

Meghan O’Gieblyn writes for n+1 on the relationship between transhumanism and religion.

“I DO PLAN TO BRING BACK MY FATHER,” Ray Kurzweil says. He is standing in the anemic light of a storage unit, his frame dwarfed by towers of cardboard boxes and oblong plastic bins. He wears tinted eyeglasses. He is in his early sixties, but something about the light or his posture, his paunch protruding over his beltline, makes him seem older. Kurzweil is now a director of engineering at Google, but this documentary was filmed in 2009, back when it was still possible to regard him as a lone visionary with eccentric ideas about the future. The boxes in the storage unit contain the remnants of his father’s life: photographs, letters, newspaper clippings, and financial documents. For decades, he has been compiling these artifacts and storing them in this sepulcher he maintains near his house in Newton, Massachusetts. He takes out a notebook filled with his father’s handwriting and shows it to the camera. His father passed away in 1970, but Kurzweil believes that, one day, artificial intelligence will be able to use the memorabilia, along with DNA samples, to resurrect him. “People do live on in our memories, and in the creative works they leave behind,” he muses, “so we can gather up all those vibrations and bring them back, I believe.”

Technology, Kurzweil has conceded, is still a long way from bringing back the dead. His only hope of seeing his father resurrected is to live to see the Singularity: the moment when computing power reaches an “intelligence explosion.” At this point, according to transhumanists such as Kurzweil, people who are merged with this technology will undergo a radical transformation. They will become posthuman: immortal, limitless, changed beyond recognition. Kurzweil predicts this will happen by the year 2045. Unlike his father, he, along with those of us who are lucky enough to survive into the middle of this century, will achieve immortality without ever tasting death.

But perhaps the Apostle Paul put it more poetically: “We shall not all sleep, but we shall all be changed.”

(…)

Humanity’s greatest fear is about being irrelevant

Ian Tucker in conversation with Genevieve Bell for the Guardian

Writing for the Guardian, Ian Tucker talks to Genevieve Bell, the Australian anthropologist working at the Intel headquarters in Oregon, to explore our anxieties about rapidly developing technology and artificial intelligence:

Genevieve Bell is an Australian anthropologist who has been working at tech company Intel for 18 years, where she is currently head of sensing and insights. She has given numerous TED talks and in 2012 was inducted into the Women in Technology hall of fame. Between 2008 and 2010, she was also South Australia’s thinker in residence.

Why does a company such as Intel need an anthropologist?
That is a question I’ve spent 18 years asking myself. It’s not a contradiction in terms, but it is a puzzle. When they hired me, I think they understood something that not everyone in the tech industry understood, which was that technology was about to undergo a rapid transformation. Computers went from being on an office desk spewing out Excel to inhabiting our homes and lives, and we needed to have a point of view about what that was going to look like. It was incredibly important to understand the human questions, such as: what on earth are people going to do with that computational power? If we could anticipate just a little bit, that would give us a business edge and the ability to make better technical decisions. But as an anthropologist that’s a weird place to be. We tend to be rooted in the present – what are people doing now and why? – rather than long-term strategic stuff.

A criticism that is often made of tech companies is that they are dominated by a narrow demographic of white, male engineers and as a result the code and hardware they produce have a narrow set of values built into them. Do you see your team as a counterbalance to that culture?
Absolutely. I suspect people must think I’m a monumental pain. I used to think my job was to bring as many other human experiences into the building as possible. Being a woman, being Australian and not being an engineer – those were all valuable assets because they gave me a very different point of view.

We are building the engines, so the question is not will AI rise up and kill us, but will we give it the tools to do so?

Now, the leadership of Intel is around 25% female, which is about what market availability is in the tech sector. We are conscious of what it means to have a company whose workforce doesn’t reflect the general population. Repeated studies show that the more diverse your teams are, the richer the outcomes. You have to tolerate a bit of static, but that’s preferable to the self-perpetuating bubble where everyone agrees with you.

You are often described as a futurologist. A lot of people are worried about the future. Are they right to be concerned?
That technology is accompanied by anxiety is not a new thing. We have anxieties about certain types of technology and there are reasons for that. We’re coming up to the 200th anniversary of Mary Shelley’s Frankenstein and the images in it have persisted.

Shelley’s story worked because it tapped into a set of cultural anxieties. The Frankenstein anxiety is not the reason we worried about the motor car or electricity, but if you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

(…)

Medicine and its Metaphors

An extract from Eula Biss’s On Immunity

An extract from Eula Biss’s On Immunity: An Inoculation, forthcoming from Fitzcarraldo Editions in February 2015, appears in Guernica this month:

Whenever I complained of a sore throat as a child, my father would press his fingers gently behind my jawbone, checking for swollen lymph nodes. “I think you’re going to be okay,” he would say upon completing his examination. This was his verdict, too, when I called him from college, miserably ill with what he identified as “probably influenza.” I asked him if there was anything I could do and he suggested, to my disappointment, drinking plenty of fluids. Then he recommended his grandmother’s prescription for a bad cold—buttered toast dipped in warm milk. He described the way the butter floated on the surface of the milk and how comforting he found his grandmother’s care. I wanted to know if there was some sort of medicine I could take, but what I needed, my father understood, was comfort. As an adult, I still never cease to feel a little surprise when a doctor reaches behind my jawbone to check for swollen nodes. I associate the tenderness of that gesture with my father’s care.

Paternalism has fallen out of favor in medicine, just as the approach to fathering that depends on absolute authority no longer dominates parenting. But how we should care for other people remains a question. In his discussion of efforts to control childhood obesity, the philosopher Michael Merry defines paternalism as “interference with the liberty of another for the purposes of promoting some good or preventing some harm.” This type of paternalism, he notes, is reflected in traffic laws, gun control, and environmental regulations. These are limits to liberty, even if they are benevolent. Interfering with the parenting of obese children, he argues, is not necessarily benevolent. There is risk in assigning risk. Children who are already stigmatized for their body type are further targeted. And families who are identified as “at risk” for obesity become at risk of discriminatory oversight. The prevention of risk, Merry observes, is often used to justify a coercive use of power.
