Artificial Intelligence Is Coming. What Does That Mean for Us?

It’s time to talk about how artificial intelligence may reshape our lives—and how humans can maintain control.

Wednesday, February 27, 2019

This article was originally published in Education Development Center's Newsroom on February 27, 2019.

Throughout history, advances in technology have drastically changed how humans live and work. But today, innovations such as artificial intelligence, big data, robotics, and biotechnology are so revolutionary—and occurring so quickly—that they have the potential to upend whole industries and entire communities. How we interact with these advanced technologies may also challenge long-standing beliefs about the divisions between human and machine.

While many futurists have speculated on how these technologies could impact human life (for better or worse), EDC’s Joyce Malyn-Smith, Paul Goldenberg, and Mary Fries are interested in examining the ethical choices that these innovations pose. In this roundtable discussion, they explore why educators and students should be discussing what it means to be human in the age of artificial intelligence, as well as the perils of ignoring this topic.

Q. At a time when many people are trying to predict how advanced technologies will reshape the world, you are looking at the future from a human angle. Why?

Malyn-Smith: I think the question, “What makes us human?” is worth discussing more than ever. We’re entering a world where the human factor and the tech factor are much more convergent. You see this in manufacturing, where machine learning is enabling engineers to consider entirely new solutions to design challenges. You also see advances in medical technology, such as smart prosthetics, that actually enhance human functioning. So as human bodies begin to integrate machinery, and as humans begin to rely more on artificial intelligence and machine learning to build our world, then I think we have to be prepared for some of the ethical questions that will undoubtedly arise.

Goldenberg: For me, the question is, How can we understand this new technology well enough so that our voices and votes can shape what we do with it? We are more than just bystanders to our future. We are active participants in it. So in that sense, this question of what it means to be human in this age is really a civics question. The social implications of technology are partly why we need civics education even more.

Q. The three of you want to develop a curriculum that addresses some of these issues. Why do you think this topic is well suited to the classroom?

Fries: We’re still in a position, as a species, to define what the future means. Do we just create these technologies blindly and see what happens, or do we shape them toward some future that we want for ourselves? I’m interested in seeing what it means for students to be thinking about those questions.

Goldenberg: Some of these questions are already relatable to kids. For example, guidance counselors help high school students identify potential colleges, but now computer programs can do the same thing, based on a student’s “portrait” built by a machine learning algorithm. And because the algorithm’s advice is based on tons of data, students may think it knows more than the guidance counselor and is free of personal biases that humans may have.

Fries: But there’s a lot of research showing that machine learning is actually learning to be biased, because it draws data from human systems that are fundamentally unequal in the first place.

Goldenberg: Exactly. And that’s why I think students should be thinking about these questions. Algorithms that are intended to reduce bias may, in fact, be institutionalizing bias.

Malyn-Smith: I’m also interested in asking provocative questions that push students to think about their role with regard to technology and society: What do humans do best? What do machines do best? Are advanced technologies changing what it means to be human? What do “privacy” and “identity” mean in an age of big data? Is artificial intelligence our friend or our enemy? How are these technologies affecting the decisions we make every day in a technology-driven world?

Goldenberg: I agree with these, but artificial intelligence as “friend or enemy” feels too dichotomous. It’s both. Just like atom splitting and gene editing. Any technology can be used for good or for harm. I think the real questions are, “What do we want from artificial intelligence?” and “What policies will we support to make new technologies more friend than enemy?”

Malyn-Smith: That’s a good point. It’s a two-sided coin.

Q. For inspiration, you are looking back at EDC’s Man: A Course of Study (MACOS) curriculum from the 1960s. Why does that curriculum appeal to you?

Fries: MACOS placed students in the role of anthropological researchers. Through watching videos and reading stories, they learned about the way people lived. They came to their own conclusions about what it meant to be human, what constituted a community. That approach remains relevant now. There’s a real need for students to consider what it means to be human in the age of technology and to have those conversations guided by the questions and realities of their world.

Q. What could happen if we don’t consider what it means to be human in this new era?

Malyn-Smith: We could lose control, which many futurists fear could happen. Machines are learning to think like humans, and we’re finally beginning to understand the importance of learning to think like computer scientists. We need to understand better what humans bring to this relationship—what we do, and will continue to do, better than computers. We have to get up to speed on what we want human-technology relationships to look like, or else we may lose control of our evolving partnership with machines.

Goldenberg: We can’t ever have full control of how new technologies will be used, and our discussions with kids need to be honest about that. But not being able to take full control shouldn’t lead kids to feel powerless and give up on trying to create the type of world we want to live in. Look at climate change. It’s already causing big problems, but that doesn’t mean it’s too late to take action. Our work with educators and kids must be nuanced—serious but hopeful.

Fries: It’s hard for people to be comfortable in that gray area, isn’t it? You can’t have control, and yet you can’t give up. And I think what makes this tension worse is that we are talking about artificial intelligence and big data—these are things that are massive and abstract. There is this feeling of being lost at sea.

Malyn-Smith: But as educators, we are the ones who can help students think about what’s coming. We can help them learn to live and work at the human-technology frontier. We can help them figure out what it means to be human in the age of AI, big data, and advanced technologies. Yes, people will have to confront some really serious issues as we move into the future. So the question is, how do we prepare?