By Wende Whitman
Alum James Villarrubia, an applied A.I. expert and current NASA Presidential Innovation Fellow, was the plenary speaker at the IEEE Systems and Information Engineering Design Symposium hosted by the Engineering School and held at UVA’s Darden School of Business. (Photo by Sydney Koerber)

A graduate of the University of Virginia School of Engineering and Applied Science warned UVA students earlier this month that, as artificial intelligence increasingly fills in all the answers for us, we should not lose touch with our natural curiosity.

James Villarrubia, an applied A.I. expert and current NASA Presidential Innovation Fellow, delivered his cautionary lecture, “Curiosity in the Age of AI,” on May 3. The thought-provoking talk opened the annual IEEE Systems and Information Engineering Design Symposium, or SIEDS, a student-focused international forum.

Villarrubia double-majored at UVA, earning a Bachelor of Science with honors in systems engineering and a degree in government in 2008. While an undergraduate, he was both a Rodman Scholar and a Jefferson Scholar. He also holds a master’s degree in public policy from UVA.

The alumnus describes himself on X as a “tech, AI and education startup nerd.” He has provided policy advice to the White House, the Department of Defense and the Justice Department.

UVA Engineering caught up with Villarrubia following his remarks. We asked him to summarize, in a few questions, the thoughts he shared.

Q. What should the average Joe or Josephine know in terms of how our culture will evolve with AI?

A. I try to explain to people that AI is really an abstraction around knowledge. Never before in our history have we had this level of change. You can think of maybe a couple of parallel moments, like the printing press and the internet. 

So the printing press expands knowledge, the internet expands knowledge, but even to sort of peruse that knowledge required effort and understanding. Now the knowledge is actually coming to you. You just have to ask. The level of effort required to learn something, to be engaged, is removed.

But with things like homework, there’s value to saying, “I'm going to make you struggle, because there's no quick way to understand this.”

You have to sort of struggle and it has to be hard. That's the point. That is the learning. 

Will humanity lose its interest in being curious? Will the simple and quick solution be the one we just default to? And will that mean we lose our ability to sort out the implications, the gaps and the missteps? 

Because anyone on the AI side can tell you, AI makes mistakes all the time. 

Q. What’s your advice for how to handle that?

A. If you want to be prepared for this coming world, you should play with AI until you see it fail. Try to understand why it's making that mistake. Once you get that far, you at least have a good framework for going into the world and seeing that, yeah, it's good, but it's not perfect. I can't trust it implicitly. And as long as you have at least enough understanding to have that sort of discretionary view of AI, you will be prepared to handle this new world.

Q. How can someone serve their community as a systems engineer in this new age?

A. Systems engineering generally is a bit more about solving problems wherever they may lie and thinking about the unifying pieces. There are all these connection points that are, systemically, really important to the future of humanity.

What I do, and what our team does at NASA, is to try to connect those things. Try to imagine big, you know, scary futures, but then also say, OK, what are all the pieces that we have now that we can put together to solve these problems? 

You have to be willing to dig in and be curious with questions like: What does water access in Sub-Saharan Africa have in common with water access in Flint, Michigan? And what does that have to do with, maybe, filtration mechanisms and osmosis research in Mexico? 

That's what's awesome about systems engineering. It trains people to think big. 

Q. Can you give a real-life example of how seemingly unrelated things dovetailed to create a positive solution?

A. Yes, my favorite example: bread and cancer.

Before a lot of the fancy generative AI tools and deep learning tools around images were being produced, there was work being done by an engineer in Japan named Hisashi Kambe. He was originally tasked with identifying the types of pastries being sold in grocery stores, because in Japan there were a lot of different types of pastries, and the cashiers were having trouble checking out customers. You can imagine how difficult it was trying to remember the prices of 300 types of bread or types of cheesecake. So they asked Kambe to come up with a camera scanner that would ring up these bread products on behalf of the cashier.

And it became this huge and complex product. It took him five years to crack it. It was intense work and almost bankrupted his company, but he eventually did crack it. He delivered this project in 2013. And that's when deep learning techniques that we think about in the Western world were still just getting started. But here's this guy over in Japan with this incredible tool, except it's just for bread. 

What I think is more important is that it was popular enough in Japan that eventually a doctor saw a television segment about it, and he reached out to Kambe and said, “Hey, you know, this bread scan thing is interesting. Cancer cells under microscopes sort of look like bread rolls. Can we work together?” 

So you've got this oncologist and this guy who builds a bread scanning tool, and they're working together. But despite being from totally disconnected fields, their shared project helped improve cancer detection all across Japan. 

Q. That’s a great explanation. So, sort of like peanut butter and chocolate — two unexpected things that went well together?

A. Yes. The peanut butter and chocolate of research.

It is just this perfect, simple example of something totally absurd that you wouldn't think about: someone in baked goods helping impact healthcare. 

Q. How do you suggest students find the next equivalent of that bread scanner?

A. The most important thing is for students not to focus too much on the math or the spreadsheets, but to also focus on understanding the people and the problems around them, and to try to draw connections across fields. Because usually there is something interesting just next door that might actually be the solution you are looking for.

There’s power in drawing those connections. There’s power in being curious.

