Lila Ibrahim of DeepMind: “It’s hard not to go through the impostor syndrome”


Lila Ibrahim is the first chief operating officer in the history of DeepMind, one of the best-known artificial intelligence companies in the world. She has no formal background in AI or research, the company’s core activity, yet she oversees half of its workforce: a global team of about 500 people, including engineers and scientists.

They are working on a singular, rather amorphous mission: to build artificial general intelligence, a powerful machine version of the human brain that could advance science and humanity. Her task is to turn this vision into a structured operation.

“It is difficult not to go through impostor syndrome. I’m not the AI expert and here I am, working with some super-smart people . . . It took me a while to understand anything beyond the first six minutes of some of our research meetings,” she says. “But I realized they weren’t hiring me to be that expert; they had hired me to contribute my 30 years of experience, my human side of understanding technology and its impact, and to do so fearlessly to help us achieve that ambitious goal.”

The 51-year-old Lebanese-American engineer joined DeepMind in 2018, moving her family to London from Silicon Valley, where she had been chief operating officer of the online education company Coursera, following 20 years at Intel. Before leaving Intel in 2010, she was Craig Barrett’s chief of staff in an 85,000-person organization and had just had twins.

As an Arab-American in the Midwest and an engineer, Ibrahim was “always the outsider.” At DeepMind she was an outsider too: she came from the corporate world, having worked in Tokyo, Hong Kong and Shanghai. She also runs a non-profit organization, Team4Tech, which recruits volunteers from the technology industry to improve education in the developing world.

DeepMind, headquartered in London’s King’s Cross, is led by Demis Hassabis and a predominantly British leadership team. In her three years there, Ibrahim has overseen the doubling of its staff to more than 1,000 across four countries, and she is tackling some of the thorniest questions in AI: how do you turn research advances into commercial value? How do you expand the talent pipeline in the most competitive technology job market? And how do you build responsible and ethical AI?

Ibrahim’s first challenge has been how to measure the success and value of the organization when it doesn’t sell tangible products. Acquired by Google in 2014 for £400 million, the company lost £477 million in 2019. Its £266 million in revenue that year came from other Alphabet companies such as Google, which pay DeepMind for any commercial AI applications it develops internally.

“Having sat on a public company board before, I know the pressure Alphabet is under. In my experience, when organizations focus on the short term, you can often be led astray. Alphabet needs to think about value in both the short and the long term,” says Ibrahim. “Take WaveNet, which is DeepMind technology now integrated into Google products [such as Google Assistant] and Project Euphonia. This is a text-to-speech service where ALS [motor neuron disease] patients can retain their voice.”

These applications are developed primarily through the DeepMind4Google team, which works exclusively on commercializing its artificial intelligence for Google’s business.

She argues that DeepMind has as much autonomy from its parent company as it “has needed so far”, setting, for example, its own performance-management goals. “I have to say, when I came in I was curious: would there be any tension? And there wasn’t,” she says.

Another significant challenge has been hiring researchers in a competitive job market, where companies such as Apple, Amazon and Facebook compete for AI scientists. Anecdotally, it has been reported that senior scientists can be paid in the region of £500,000, with a few commanding millions. “DeepMind[’s pay] is competitive, whatever your level and position, but it’s not the only reason people stay,” says Ibrahim. “Here people care about the mission [of building artificial general intelligence] and see how the work they do moves the mission forward, not only in itself, but also as part of a larger effort.”

The third challenge Ibrahim has focused on is translating ethical principles into the practical aspects of DeepMind research in AI. Increasingly, researchers are highlighting the risks posed by AI, such as autonomous killer robots, and issues such as the replication of human biases and the invasion of privacy through technologies such as facial recognition.

Ibrahim has always been driven by the social impact of technology. At Intel she worked on projects such as bringing the internet to isolated populations in the Amazon rainforest. “When I had my interview with Shane [Legg, DeepMind co-founder], I went home and thought: could I work at this company and put my twin daughters to bed at night knowing what their mother worked on?”

DeepMind’s sister company Google has faced criticism for how it has handled ethical concerns in AI. Last year Google was reported to have forced out two of its ethical AI researchers, Timnit Gebru and Margaret Mitchell, after they suggested that language-processing AI (which Google also develops) can echo the biases of human language. (Google described Gebru’s exit as a “resignation”.) The public fallout provoked a crisis of faith in the AI community: are technology companies like Google and DeepMind aware of the potential harms of AI, and do they have any intention of mitigating them?

To this end, Ibrahim created an internal, multidisciplinary social-impact team. It meets with the company’s core research teams to discuss the risks and impacts of DeepMind’s work. “You have to constantly review the assumptions . . . and the decisions you have made, and update your thinking based on that,” she says.

She adds: “If we don’t have the expertise around the table, we bring in experts from outside DeepMind. We have brought in people from the security space, privacy, bioethics, social psychologists. It was a cultural barrier for [scientists] to open up and say, ‘I don’t know how this could be used, and I’m almost afraid to guess: what if I’m wrong?’ We have done a lot to structure these meetings so that they are psychologically safe.”

DeepMind has not always been so cautious: in 2016 it developed a highly accurate system for lip-reading from videos, with possible applications for the deaf and hard of hearing, but it did not acknowledge the security and privacy risks the technology posed to people. However, Ibrahim says DeepMind now gives much more consideration to the ethical implications of its products, such as WaveNet, its text-to-speech system. “We thought through possible opportunities for misuse, and where and how we could mitigate them and limit their applications,” she says.

Ibrahim says part of the job is knowing what AI should not be used for. “There are areas where it should not be used. For example, surveillance applications are a concern [and] lethal autonomous weapons.”

She adds: “I often describe it as a moral calling. Everything I had done prepared me for this moment, to work on the most advanced technology so far and [on] understanding . . . how to use it.”

Three questions for Lila Ibrahim

Who is your leadership hero?

Craig Barrett. I was his chief of staff at Intel, and at the time he was CEO. He followed in the footsteps of Bob Noyce, Andy Grove and Gordon Moore . . . they were legends of the semiconductor industry. Together, we were doing a lot of pioneering work, such as bringing internet connectivity to remote parts of the world that had never had access. He would say, “If anyone gives you shit, come talk to me, because I have your back.”

What was the first leadership lesson you learned?

There were a lot of people in the organization questioning [my work]. I had problems with some of [Barrett’s] direct reports, senior executives. He sat me down and said, “Lila, trailblazers always end up with more arrows in their back than in their front, because everyone is always trying to catch up.” He said, “Let me pull out these arrows so you can run faster.” It’s the same with mentoring: I want people to try things and not be afraid to make mistakes. The reason I can do this is because, at the beginning of my career, my leadership hero did it for me.

If you weren’t a CEO/leader, what would you be?

The first job I ever wanted was president of the US, though these days it would probably be more of a diplomat. Bringing people together and understanding their differences to move things forward is something I realized I had always been passionate about. It’s about finding common ground where the differences are obvious.
