
Meet Dr. Ayanna Howard: Roboticist, AI Scientist, and Old School #Blerd

It’s not every day you meet a sister who not only builds robots, is an expert in Artificial Intelligence, and has worked for NASA’s Jet Propulsion Lab, but is also a down-to-earth, humorous, old-school Blerd (Black nerd) who was inspired by The Bionic Woman, Wonder Woman, and all things Sci-Fi as a little girl.


Today, Dr. Ayanna Howard is a respected roboticist and a Motorola Foundation Professor in the School of Electrical and Computer Engineering at Georgia Tech’s Institute for Robotics and Intelligent Machines.

[Related: Diversifying Google: Meet Three Black Google Engineers]

She received her B.S. in Engineering from Brown University, and her M.S.E.E. and Ph.D. in Electrical Engineering from the University of Southern California. Her research focuses on humanized intelligence (what we informally call “Artificial Intelligence”). She is renowned for creating robots that study the impact of global warming on the Antarctic ice shelves.

At NASA’s Jet Propulsion Lab, she led research on various robotics projects and was a senior robotics researcher, eventually earning NASA’s Honor Award for Safe Robotic Navigation Task, among many other distinguished science awards and honors.

BlackEnterprise.com interviewed Dr. Howard about her early days, about Artificial Intelligence, and her thoughts on diversity and inclusion in STEM.

BlackEnterprise.com: How did you find your way into robotics and AI/humanized intelligence research?
Dr. Howard: Robotics has been something I wanted to do since middle school. I was a Sci-Fi nut. I loved the original Star Trek. The Next Generation was okay, but nothing like Kirk.

I remember watching and wanting to do something in Sci-Fi. Wonder Woman, The Bionic Woman … I was totally fascinated. I wanted to be The Bionic Woman, which, of course, is not a career.

I started working at JPL (NASA’s Jet Propulsion Lab) after my freshman year in college. That’s when I became involved in programming. I was never classically trained as a computer scientist; I had to learn db4 and Pascal. JPL – they do robotics. So, I paired up with a group focused on AI. When I started grad school, I had to figure out if I could use what I was doing at JPL with what I was doing in grad school.

It’s one of those fields where you simply don’t see many, if any, women of color. Do you feel like an anomaly, and if so, how do you deal with that? Or is it something you don’t really think about?
It would affect me when I’d go into a room and there was no one there who remotely looked like me. It does affect you when you are younger, when you get questions from others: “Are you supposed to be here?”

You think, “Maybe there is a reason why I am the only one.” You need people to say, “Yeah, you can do it!”

My mom always called me stubborn. Telling me I couldn’t do something is the best way to get me to try and figure out how. I wanted to do a Ph.D.; I was challenged.

Can you talk about robotics without talking about AI? Are the two independent areas of research?
You can talk about AI outside the domain of robotics because intelligence and learning can be applied to computers: a system that learns how you type, for example. It’s a learning system, not a robotics system.
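To make that distinction concrete, here is a toy sketch (an illustration for this article, not anything from Dr. Howard’s research): a “learning system” with no robotics involved. It simply observes what a user types and predicts the next word from frequency counts.

```python
from collections import Counter, defaultdict

class TypingPredictor:
    """Learns word-to-word patterns from observed typing."""

    def __init__(self):
        # For each word, count which words have followed it.
        self.following = defaultdict(Counter)

    def observe(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def predict(self, word):
        # Suggest the word most often seen after this one, if any.
        counts = self.following[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

predictor = TypingPredictor()
predictor.observe("good morning team")
predictor.observe("good morning everyone")
predictor.observe("good night")
print(predictor.predict("good"))  # "morning" has been seen most often
```

Nothing here senses or moves anything in the physical world, which is the point: learning alone does not make a system a robot.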


At a CES 2016 panel on AI, there was a discussion that AI is moving away from “science project” territory and becoming something with more practical, real-world application. Do you agree?
I do, although I don’t think people realize it. For example, if you use Siri on your phone and you are always asking for a new Thai restaurant in Atlanta, eventually it learns, “This person is not interested in Chinese or Soul food.” We don’t even think about it – it learns as you use it. If you go to Google and you search on different machines, you get different results.
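The kind of preference learning she describes can be sketched in a few lines. This is an assumption for illustration only, not how Siri actually works: each request nudges a cuisine’s weight up, so rankings drift toward what the user actually asks for.

```python
from collections import Counter

class PreferenceModel:
    """Ranks options by how often the user has requested them."""

    def __init__(self):
        self.requests = Counter()

    def record(self, cuisine):
        # Every request is a small signal about what this user likes.
        self.requests[cuisine] += 1

    def ranked(self):
        # Most-requested cuisines first.
        return [c for c, _ in self.requests.most_common()]

model = PreferenceModel()
for query in ["thai", "thai", "soul food", "thai", "chinese"]:
    model.record(query)
print(model.ranked()[0])  # "thai" dominates after repeated requests
```

Real assistants use far richer signals, but the principle is the same: the system adapts from use, without the user ever thinking about it.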

There was also the discussion that the use of the term “artificial” is outdated and not quite accurate … that there needs to be a new way to think about AI. Your thoughts on that?
I never use the term “AI.” I use the term “humanized intelligence.” The whole aspect of intelligence is that learning is done in the context of people. It’s our environment. We are using these systems to enhance our quality of life. We would not be happy with an artificial system that does stuff that might be optimal, but not in the way we do things.

What do you see as the difference between business uses of AI versus consumer uses?
I see, at least in the startup space, that a lot of the companies getting investments are in the data-mining space. Look at Netflix – that’s an enterprise learning people’s preferences [to] deliver ideal content. On the consumer side, machines really help our own quality of life. For me, it’s my own personal preference – individuals listen to one song [for example], and then with preferences, the next time [you sign in] you get better [selections].

It’s almost kind of scary … once you use these learning apps they are pretty good at “getting it” in a short time. Algorithms are getting better, and of course there is more data.

Stephen Hawking, Bill Gates, and other tech leaders wrote a letter about the danger of AI after the military announced it was funding research to develop autonomous, self-aware robot soldiers. Hawking wrote that “humans, limited by slow biological evolution,” couldn’t compete and would be superseded by AI, and that AI could be more dangerous than nuclear weapons. What are your thoughts on this?
I am on the other side of the camp. [AI] is no more dangerous than any other kind of tech. If I give you anything, there is a good and a bad; by nature we have good people and bad people. You can’t stop that.

The problem is if you say ‘no,’ the good people cannot work on it. All you have in society are those who are not following the rules – just creating the bad. So then, we are destined to go down the path we don’t want to go down. We can create tech that is good and has social impact.

What will AI be like in 20 years?
I do predict that it will just be “programs,” not called “intelligence.” I see learning intelligence algorithms integrated into any tech you can think of: appliances, cars, phones, even our education system. And I also see it integrated into hardware – into robotics, trains, physical things – as well as [continued integration] into smart homes.

Finally, do you see, especially as a professor, progress in the numbers of minorities in STEM studies or careers?
I do, but [there’s] a caveat. It’s better – and although I see an increase in the number of females and minorities, it still doesn’t reflect the demographics; there is still that gap. Is it widening if you include the world’s demographics? Yes, that gap might be widening. But if I look at the year-to-year increase, it’s better.
