Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. We know that artificial intelligence tools are augmenting human performance.
But how do people really feel about that? On today's episode, find out how one company develops AI tools with end users in mind. I'm Elizabeth Ann Watkins from Intel, and you're listening to Me, Myself & AI. Welcome to Me, Myself & AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College.
I'm also the AI and Business Strategy Guest Editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate. Welcome.
Today, we've got a great guest, Elizabeth Ann Watkins, a research scientist at Intel. Elizabeth, thanks for joining us today. Let's get started. Thank you so much for having me today. As the world's largest semiconductor manufacturer, Intel is probably a company that most people already know. But maybe can you tell us about Intel Labs in general and maybe your role specifically?
Just like you said, we're not always top of mind in the big discussions around AI and the AI industry and the AI field right now, but there is so much fascinating work happening inside Intel, and we have such a unique perspective on the field and a unique way of entering that field that I'm really excited to bring some of that to light today in our conversation.
I just joined Intel in August of last year, and already it's been a really incredible experience meeting so many different teams. I joined Intel as a research scientist in the social science of artificial intelligence and work under Lama Nachman in Intel Labs, and the group is called Intelligent Systems Research. Elizabeth, you mentioned Intel Labs is doing some unique things with AI. Do you mind sharing with us some of the things you're working on?
A project that I'm particularly excited about is called M.A.R.I.E., an acronym which stands for Multimodal Activity Recognition in Industrial Environments. So basically, I'm going to start with a metaphor. Imagine that your computer could watch you put together, say, a piece of furniture that you ordered on the internet.
When you get to a tough part of the manual, say you're holding a screwdriver and a piece of plywood and you can't get back to the manual, imagine that your computer could see what you were doing, knew what the manual was going to tell you to do, and then helped you connect those two. Imagine that your computer could actually tell you, hey, I think you are about to screw shelf A into bracket B
or something of that nature. And I know that every time I have received that flat pack that they say has an armchair in it, it's a really tough time for me to get from A to B. And it's processes like these that our people are doing inside of our facilities where they're actually building and manufacturing the semiconductor chips.
So the folks who work inside of our factories, our technicians, are doing very involved and very delicate work handling parts and tools for all kinds of manual operations happening on the factory floor.
And so all of the work that they're doing is just as complicated, sometimes even more complicated than getting that flat pack into an armchair. There's a lot of tools involved. There are all kinds of different processes, different pieces of equipment, different sizes of equipment. And so we are building computer systems a little bit like the one that I described that said, hey, did you mean to put screw A into bracket B? We're building systems to help the people inside of our factories do this kind of really careful and really complicated work.
Actually, that's a fun analogy. I mean, I think we all find flat packs challenging, perhaps, though I have to admit I kind of enjoy them. But I'm sure it's much more complicated within Intel. And what I liked about that example is I feel like so often we're talking about automation. So can we get machines to learn how to do something that humans do? So it's machine learning at its core. And then we talk about augmentation and, well, how can machines help humans make a decision?
This strikes me as a little bit different. This is going the next step. So in this case, we're not trying to get the machine to assemble the flat pack. We're not trying to get the machine to assemble the semiconductor. It's still the humans doing it. And so it's the humans who are needing to learn here. That seems like pushing that a little bit further than we've talked before.
I'm so glad you picked up on that, Sam. That's exactly what we talk about inside Intel Labs: the human is the expert. The human is the one who knows how to interact with these systems, and they know how to screw the bits together, and they know all of the physical and dynamic intricacies of what it takes to put these tiny and very delicate components together.
We don't want to train a computer to do that. A computer cannot do that. And that goes for both the assembly and the cleaning processes, because of course all of these processes take place within these super-clean rooms where everyone has to wear hazmat suits and gloves.
We want to support the humans who are doing it. There is no computer who could do this. It takes human judgment. It takes human expertise to do these processes in a way that is comprehensive and truly dynamic and can respond to new pressures. If a larger piece of a machine bends in a particular way or if a screw falls into a hole in a particular way, it's going to be many, many years before computers are able to assess and diagnose and fix these kinds of constant dynamic problems.
But you know who's really good at doing that is humans. And so we're trying to center humans and center human expertise. And that points to this dual reason that I'm really excited about Marie is that there's both a tech side and there's a human side. And on the tech side, I think Marie is really exciting because it's multimodal. It combines a lot of different kinds of AI systems through ambient sensing. It combines computer vision, which uses activity recognition technology,
with audio sensing combined with natural language processing in a way that can help build an ambient environment around the technician, ultimately to help them with what they do. But before the system can help them, the system has to actually learn. And it's the human experts who are teaching the system how to learn and teaching the system what it needs to know.
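To make that multimodal idea a bit more concrete, here is a minimal sketch of how a vision-based activity label and a speech transcript could vote on the technician's current step and suggest the next one. The manual, the step names, and the keyword matching are all assumptions for illustration; the real MARIE system would rely on trained activity-recognition, audio, and language models rather than this toy fusion rule.

```python
# Illustrative sketch only (hypothetical names, not Intel's actual MARIE code).
# Two upstream components are assumed and reduced to plain strings here:
# a vision-based activity recognizer and a speech transcriber.
from dataclasses import dataclass

# A tiny stand-in for the task manual: ordered steps, each with keywords that
# either modality might surface while the technician works.
MANUAL = [
    ("attach_bracket", {"bracket", "screwdriver"}),
    ("seat_shelf", {"shelf", "slot"}),
    ("torque_bolts", {"torque", "bolt"}),
]

@dataclass
class Observation:
    vision_activity: str  # e.g., a label from an activity-recognition model
    speech_text: str      # e.g., a transcript from the audio/NLP pipeline

def infer_current_step(obs: Observation) -> int:
    """Fuse the modalities with a simple vote: the step whose keywords best match wins."""
    tokens = set(obs.speech_text.lower().split()) | {obs.vision_activity.lower()}
    scores = [len(keywords & tokens) for _, keywords in MANUAL]
    return scores.index(max(scores))

def suggest_next_step(obs: Observation) -> str:
    step = infer_current_step(obs)
    if step + 1 < len(MANUAL):
        return f"Looks like you're on '{MANUAL[step][0]}'; next up is '{MANUAL[step + 1][0]}'."
    return "That was the last step in the manual."

print(suggest_next_step(Observation("bracket", "putting the screwdriver to the bracket")))
```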
And so this is, as far as I know, a brand-new way to deploy AI systems into new domains and apply them to new problems: we are confronting and tackling one of the big challenges of AI development, which is data collection. You need a lot of data to get into a new domain. You need a lot of data that is labeled the appropriate way, labeled according to the kinds of problems that you're trying to solve.
And so we've kind of flipped the script. Instead of trying to get all the data we can before a system is deployed,
we go ahead and deploy a system that then works in partnership with the experts on the ground. And they teach the system through speaking aloud and through dynamic data labeling. They teach the system what it needs to know. And so hopefully the data that is going to be used to help people do these tasks down the line is going to be produced by the very first batches of people who are using the tool.
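As a rough illustration of that flipped script, the sketch below shows how a deployed assistant might pair a technician's spoken-aloud narration with the sensor window captured at that moment and bank it as a labeled example, so the very first users end up producing the training data. The class names and the pairing logic are hypothetical, not a description of Intel's actual pipeline.

```python
# Illustrative sketch only (hypothetical names, not Intel's actual MARIE code).
# Instead of labeling a dataset before deployment, the deployed assistant pairs
# whatever the technician says aloud with the sensor window recorded at that
# moment and banks it as a training example for later model building.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorWindow:
    timestamp: float
    features: List[float]  # stand-in for fused vision/audio features

@dataclass
class LabeledExample:
    window: SensorWindow
    label: str  # taken directly from the technician's spoken narration

@dataclass
class DynamicLabeler:
    dataset: List[LabeledExample] = field(default_factory=list)

    def on_utterance(self, window: SensorWindow, transcript: str) -> None:
        # The narration ("now seating the shelf") becomes the label for the
        # sensor window that co-occurred with it.
        self.dataset.append(LabeledExample(window, transcript.strip().lower()))

labeler = DynamicLabeler()
labeler.on_utterance(SensorWindow(12.5, [0.1, 0.7]), "Now seating the shelf")
labeler.on_utterance(SensorWindow(14.0, [0.3, 0.2]), "Torquing the bracket bolts")
print(len(labeler.dataset), "examples collected for future training")
```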
That seems like a great analogy, because as you were saying that, it reminded me of what you would do if it was a new human partner joining the project. You'd go through those same steps. Shervin and I have some research where we find that 60% of the people out there are thinking of AI as a coworker. And that's exactly the sort of relationship that you're describing. But it made me think of a little danger here. Let's say that when you put that system out there, it doesn't know much and you've got the humans training it. All right, I'm a human.
I feel like I'm a little bit annoyed. Why am I having to work so hard to train this coworker? How do you manage that dynamic? You mentioned there being a technical side to this project, but a social side as well. That strikes me as maybe harder on the social side than on the technical side.
That's precisely what I want to talk about next because that's precisely what I get to do. I get to talk to the people about what they actually think about these systems and you are right on the money. Sometimes they do perceive it to be annoying where they're asking, wait, so I'm putting together this really complex instrument and the system asks me what I'm doing and it's wrong? So how am I supposed to spend time
not only doing my job, but also teaching this machine how to do my job? And so I am really gratified that I get to be the social scientist who is there right on the ground. We're still deep in development, and it's common for companies to do as much testing as they can, of course, before something is deployed.
And then the actual elements of deployment, things such as user experience and user interface design, typically come closer to the end of a development process. But here, the social science and the understanding of what people need, what people might find annoying, and what they need in order to trust a machine that is a coworker with them is right in the DNA of the development process.
And so I get to consistently speak with the technicians that we are building the system for and ask them questions just like the one that you described, asking them, do you think this is annoying? How might this be more helpful? What are some other affordances that we can build into the system to make it more helpful for you? And also asking deeper questions, not just about their dyadic relationship with the system, but also about their larger socio-technical context of work.
Asking about, like, what does your schedule look like? And what have your bosses said about the system? And is there a way that we could try to facilitate more productive relationships, not just with your bosses, but also with your other coworkers, through this system? And so our goal is to build a very deep understanding, not just of one task, but of an entire space of work, and to ask how our system can help in the best way it can to amplify human potential by being a coworker for humans within this space, but also how it can be built with an understanding of their work context in order to facilitate a long-term relationship.
Because one of the things that's demanded by the fact that we're leaning on people to provide data and leaning on people to help us actually do the data labeling is that we need them to be engaged. And we need them to like interacting with the system. And we need them to find it trustworthy and reliable and useful and understandable.
How all of those goals can be achieved requires a deep understanding of social context and social habits and work habits and task flows. And so that's precisely what we're working on now. I think it's really interesting how you seem to be thinking beyond just this project. There's the element of, okay, how do I get this particular project in place? But underlying a lot of the things you're talking about are questions like, how do we figure out the best way to introduce essentially a new system? How do we figure out the best way to transform and get to our overall vision? And that transcends just a single project. How are you trying to pick up on those lessons?
That is a really great question. I am really proud of our labs and some of the big bets that we're making on human-AI collaboration, and how much the input and contributions and insights drawn from social science are being picked up, and have been picked up historically at Intel, throughout its research projects.
As we're doing research projects and building products across verticals like education and manufacturing and accessibility, there is deep investment across the teams that facilitates and encourages conversations with folks like me, with the social scientists. We often get these really fantastic teams involved in a room where we have engineers and data scientists and policymakers and social scientists, including not just me, but a wonderful team of anthropologists headed up by Dawn Nafus, as well as another team of social scientists that includes psychologists. And we all work together really closely to ensure that the products we build are not just designed to answer one problem, but that they're properly engineered around what the best solution is, that we take advantage of all the different kinds of multimodal affordances that our tools can provide, and that we're deeply understanding of the social and organizational context that all of our products are going to be deployed into. I think there's a transition happening: when we were first talking about these models and tools and using data, things were very heavily data and tool oriented and very heavily science oriented. And that made a lot of sense, because we didn't know how to do some of these things. And now, as a society, we're learning how to build many of these tools and models. But a lot of the team you mentioned there involves anthropology, psychology, biology.
These are not traditional things that we might have thought of being integral to producing an AI application. But those are the ones you happen to mention. Why did you pick those? I guess that's some of your background in social science, right? Yeah. Throughout my graduate studies and postdoc work, I was always really invested in the users and in people.
And I was always really fascinated by just how weird people can be and how creative and how innovative and how folks often use technologies in ways that their developers did not anticipate and did not foresee.
And while there are some dangerous elements to that, as we've seen in various misuse and dual-use applications, there are also a lot of really fantastic and wonderful ways that people have figured out for technology to be more suited to them and their context, or to work a little bit more smoothly for them. And so seeing how Intel was different, because of its history as a semiconductor manufacturer, being deeply invested in hardware, and having an ecosystem approach to the way that AI tools are deployed, really showed me that it was a company that cared, just like I did, about the people who are going to be using these tools, the social structures in which these people are embedded, and building tools that could match not just the people who bought or used the tools, but also the lives and the communities into which these technologies were going to be interjected.
Elizabeth, clearly a lot of your background has been coming through as we've been talking. But could we step back for a second and ask you a bit about how you ended up in this role at Intel Labs? I guess my road to this role started way back, probably at the beginning of my graduate career, after I graduated from MIT and started my PhD at Columbia. There was an opening for someone to contribute to a project on technology
security and privacy behaviors among journalists. And my advisor said, "Hey, I got you an interview for this project." And I said, "I don't know anything about security and privacy." And she said, "Just go to the interview. I got you the interview. Just go." And so the night before the interview, I thought, "Oh, I better have something to talk about. PGP, pretty good privacy. Everyone's talking about PGP. I better download PGP so I can act like I know what PGP is." And I tried to figure it out, and it was hard.
I couldn't figure out, I went to the website and I was like, "The website doesn't really explain what's happening here. Where's the key? Do I download the key, but the key is also kept in a database, but I write the key on my emails?" Oh, I can't do this. I'm so bad at this. This is not for me. And I went into the interview the next day with a wonderful project lead, Susan McGregor,
And I said, you know what? This job's not for me. I tried to do PGP. It's weirdly hard. There's something about it that I just, I can't quite grok the language. And she said, I think that makes you perfect for this job because we're trying to figure out why are security and privacy so hard for people at work, especially journalists who are high value targets for attack.
from a lot of different actors. And we're trying to figure out how we can make security and privacy protocols better for them. So sounds like you're frustrated with how security is designed. And I said, yeah, that was frustrating. And she said, okay, let's do some work. So we ended up working together for several years studying journalists. And we had one particularly harrowing project studying the journalists who published the Panama Papers. And for that, they had terabytes of data.
There were journalists working all over the world. None of them were co-located. And they didn't have a single breach. Never once did they have a breach in all of that data. And so we saw that as a success story. And we said, okay, we're always hearing the bad stories about breaches and attacks; we never hear the success stories. And so we got to interview the journalists who contributed to the Panama Papers and talk to them about their organizational culture, and how making protocols uniform across all of the journalists helped to instill a sense of teamwork and a sense that they were protecting each other. Seeing the power of culture and shared mission on behaviors around something as difficult as PGP and security protocols was really inspirational. And around that time, I started paying attention to facial recognition. And from there, it was a short bridge over to looking at AI and asking similar questions: Why is this hard to understand? How can we communicate it to people in a way that is more strategic and more understanding of the work that they're doing and the work that they need to do?
And how can we bring some transparency to these systems in a way that is meaningful and understandable for real people? What I think is interesting about that is security is always like that. If it's a little bit hard to do, then we all tend not to do it, because it's tomorrow's problem versus today's problem; if it leaks, that's a future problem, and that kind of trade-off is something we as humans are terrible about. And I think where we've failed in many ways here is in not building that into the infrastructure so that it's the default.
And there's a lot of analogies for artificial intelligence. We're building things. And if we don't make the defaults easy to use or easy to do the right thing, then people will do the wrong thing. Or people will do the easy thing or the short-term thing. And you mentioned facial recognition. What are the kinds of things that we should be building into our infrastructure so we don't propagate AI-based mistakes?
That's a great question. Something that drives my research both throughout my graduate career and here at Intel is a recognition that people are the experts in their own lives and that we really need to engage with the people and with the users throughout the development pipeline.
in order to build not just systems, but solutions, solutions for the real problems that are happening on the ground. And getting as close as a company can to including the expertise of social science, and what social scientists can bring in terms of rigorous and robust tools to study how people live, the kinds of language they use, the kinds of values they have, what's truly important to them, and what they want to protect and what they want to keep safe, as well as building in different kinds of options and different kinds of pathways into the same technology so that it might be used differently by people with different levels of accessibility: these are all things that I would love to see the entire field take up. Well, that sounds good.
I mean, much like buying low and selling high, that sounds great, but how do companies actually do this? What steps do they need to take to make progress on this?
That's a great question. The way that we're doing it at Intel is that we are facilitating conversations between our technical teams, people who are building really amazing systems around robotics and accessibility tools and educational tools and embedding social scientists across all these teams so that we can ask some questions around, hey, what are the presumptions of this system?
Have you talked to teachers? Have you talked to the folks who are going to be using this robot on the ground? What are your conversations like with the people who are going to be served by these solutions? And I'm really lucky that I get to be in a place where we're embedding this expertise way back into the problem formulation stage and into the project formulation stage all throughout the development pipeline.
I've also been really gratified that Intel established the ethical impact assessment process along with the Responsible AI Council. This is a process for robustly and rigorously building responsible AI review into the development pipeline for teams all across Intel and across all business units. It's a way to inject the expertise not just of social scientists but of everyone who sits on the Responsible AI Council, including engineers and policymakers and folks from legal and folks from standards, and to facilitate conversations between this multidisciplinary Responsible AI Council and development teams through their submission of the ethical impact assessment. The ethical impact assessment is a way for us to put values into practice and ensure that values around human oversight and human rights and privacy and safety and security and diversity and inclusion are built into the tools that Intel business units are putting together across the organization. Okay, we have a segment where we ask our guests a series of five rapid-fire questions. Just answer the first thing that comes to mind. First, what is your proudest AI moment? Oh, goodness. That's a great question. I'm really proud of being able to
represent the voices of the technicians who we have on the floor to these teams of engineers and data scientists who are building these systems. Because of the process that Intel has built, I get to sit with the engineering teams who are building the computer vision and the action recognition and the NLP, choosing the phrases and the semantic frames around tasks, how tasks are built, and how they can be recognized. I'm able to do interviews with technicians to ask them about their concerns when this technology is introduced, what we can do to make sure their concerns are addressed, what they need to know around transparency, and what they need in order to trust the system, all in the service of enhancing the work that people are doing. And the fact that I get to gather that information, deliver it consistently to the engineers, and see how quickly they respond, like, oh, well, we can build this and we can build that, well, what if we built this into the UI, and what if we built this into the dialogue?
It's such an incredibly compelling process, especially after coming from academia, where I would write a paper and then it would take a year to get published. Oh, a year sounds fast. Yeah.
Right, and that's if I were lucky and it only took a year. I think my longest record was maybe three and a half years to get published. I don't know. What's your record? Ten. Ten? Yeah. Let's not talk about it, though, because I'll get all sad. Oh, no. So you mentioned concerns that people have. What worries you about artificial intelligence? Well, one of the biggest concerns when I started talking to the technicians was that they were going to get replaced.
I said, okay, let's have a conversation. What do you think this is for? And they said, I've seen the news. I know you're trying to build a robot just to do exactly what I do. So I figure I'm training my replacement robot.
And I thought, oh no, that's terrible. And I got to have a lot of conversations with our technicians where I got to say, that's not what we're doing. We are not trying to replace you. You are the expert. We need you. In fact, we need you to teach the system so that the system can go and help other people like you.
So by you training the system, you're actually helping a lot of other folks in Intel fabs across the planet. And you are ensuring that they get help in the way that you would like to get help. Some research that Shervin and I were involved in a couple of years ago focused entirely on this organizational learning aspect that you were alluding to, that this is a way for everyone to learn more quickly and to spread knowledge. What's your favorite activity that does not involve technology?
What's not AI going on in your world? Well, there's so much happening recently. It seems tough sometimes to figure out what's AI and what's not. My personal life, I do a lot of baking. It's been a while since I made a loaf of bread. I do a lot of cooking.
I am very lucky to live with my husband in the city of New York. And so we do a lot of exploring. In fact, just getting outside. That's probably the biggest non-AI activity: walking outside on our own two feet and looking around with our own eyes. And that always feels very refreshing. What's the first career you wanted when you were young? What did you want to be when you grew up? I wanted to be an artist. And it has been a topsy-turvy, winding road of a career. I did my undergrad at UC Irvine and studied video art. What's your greatest wish for AI in the future? There's so much that AI can do that humans cannot, but there's also so much that humans can do that AI cannot. And I think a big challenge, at least for me, and I hope for the people with whom I work, going forward for the next few years is going to be figuring out exactly what that balance is.
And how can we systematize what people really need help with that AI systems can do, with a really thorough understanding of what it is that people do and how they do it? AI systems are becoming so advanced, but oftentimes in a lab or in a vacuum. As they mature into the real world,
I think it's really exciting what they can do, but we're facing a lot of work in getting them there. And I think it's going to be other social scientists and multidisciplinary teams like the ones that we have at Intel all working together to make sure that these systems can really be deployed as solutions.
That's a good cap for the episode. Thanks for talking to us. I think one of the things that's come through is that you're talking about projects and things you're working on, but you're also giving us a hint about what's going to happen in the future as we move off of the pandemic and off of being, let's say, tool focused: Can we get the ML right? Can we get the model right? Can we get the data right? You're talking a lot about what happens next. What happens once we check off some of those check marks? What are those checks? And I think a lot of people can learn from that. Thanks for taking the time to talk with us today. Oh my gosh, it's been such a pleasure. Thanks for listening. Next time, Shervin and I speak with David Hardoon from Aboitiz Data Innovation. We're once again talking about chemical engineering, so you won't want to miss this one.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.