Stay tuned after today's episode to hear Sam and Shervin break down the key points made by our guest.
We can all certainly learn from companies' successes with AI. But what about from their failures? On today's episode, we speak with one leader who encourages organizations to share the bad along with the good, and hopes that we can all learn together.
I'm Rebecca Finlay from Partnership on AI, and you are listening to Me, Myself, and AI.
Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College.
I'm also the AI and Business Strategy guest editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and deploy and scale AI capabilities, and really transform the way organizations operate. Hi, everyone.
Thanks for joining us today. Shervin and I are very happy to be speaking with Rebecca Finlay, CEO of Partnership on AI. The organization is a nonprofit that brings together a community of more than 100 partners to create tools, recommendations, and other resources to ensure we're all building and deploying AI solutions responsibly. Rebecca, this is super exciting and important work, and we'd love to speak with you about it. Thanks for joining the show.
Thank you so much for having me. I've been looking forward to this conversation.
Wonderful. So let's get started. Tell us more about the organization, its mission and purpose.
The Partnership on AI was formed in 2016 with the belief that we needed to bring diverse perspectives together in order to address the ethical and responsible challenges that come with the development of artificial intelligence, and also to realize the opportunities to truly ensure that the innovation of AI benefits people and communities. And so, with that belief in mind, a group of companies and civil society advocates and researchers came together to chart out a mission to build a global community that has now come together for many years, focused on ensuring that we're developing AI that works for people, that works for workers, that drives innovation, that is sustainable and responsible, privacy-protecting, and really enhancing equity and justice and shared prosperity.
Maybe give us some examples of the companies and the type of research.
The very first investment that was made in the Partnership on AI was by the six large technology companies: Amazon and Apple, Microsoft, Facebook (now Meta), Google, DeepMind, and IBM. And it was really at that moment when this new version of AI, or what was new then, you know, deep learning, predictive AI, that wave of AI, was really starting to be deployed in internet search mechanisms, in mapping mechanisms, and in recommendation engines, and there was the realization that there were some important ethical questions that needed to be answered.
And so that brought together a whole group of other private sector companies, but also organizations like the ACLU and research institutes at Berkeley, Stanford, and Harvard, and internationally as well, so organizations like the Alan Turing Institute in the U.K. and beyond. And so that group came together. And now we have a number of different working groups that are really focusing both on the impact of that predictive AI but, even more importantly, the potential impact of generative AI and foundation and frontier models.
What are some examples of progress that you feel like the partnership has made? What are some specifics here?
Particularly in this area, it's clear that we need to think about it through what we would call a sociotechnical lens. Yes, there are technical standards, like watermarking, or standards like C2PA, that are thinking about how you clearly track the authenticity of a piece of media through the life cycle.
But you also need to think about what are the social systems and structures in place. So one of the efforts that we developed, and now have 18 organizations signed up and evolving with us, is the framework for the responsible development of synthetic media. And that is really looking across the value chain: What are the responsibilities of creators, developers, deployers, platforms, and otherwise when it comes to thinking about how to disclose appropriately, to make sure that whoever comes in contact with the media that is developed is aware that it is AI-generated in some way, and also to make sure that the media that is being developed is not being maliciously used or being used in any way to harm? And we have a whole set of information about what those harms are and why we need to be protecting people from them. So that's a really important effort.
And of course, the question is, how is it being used? And so one piece of this work that we've been doing is to ensure that the companies and organizations and media agencies that have signed up to support this work are really being transparent about how they are using it to respond to real-world circumstances. And so we make that available. And the goal is, yes, both to be accountable in terms of the framework itself but also to try to create case studies that other organizations can use and learn from as well.
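To make the provenance idea concrete: standards like C2PA work by attaching a tamper-evident manifest to a piece of media at each step of its life cycle. Below is a minimal toy sketch of that chained-manifest idea in Python. It is illustrative only, assuming a single shared signing key for simplicity; the real C2PA specification uses certificate-based signatures and a much richer manifest format.

```python
# Toy provenance manifest: a hypothetical sketch of the C2PA idea,
# not the actual C2PA format (which uses certificate-based signatures).
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use PKI, not a shared key


def make_manifest(media: bytes, action: str, prior: dict | None = None) -> dict:
    """Record one step in a media file's history, chained to the prior step."""
    entry = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "action": action,  # e.g., "ai_generated", "edited", "published"
        "prior_signature": prior["signature"] if prior else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry


def verify(media: bytes, manifest: dict) -> bool:
    """Check the signature and that the media matches the recorded hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest["signature"], expected)
        and claimed["content_sha256"] == hashlib.sha256(media).hexdigest()
    )


image = b"...pixel data..."
created = make_manifest(image, "ai_generated")          # disclosure at creation
published = make_manifest(image, "published", created)  # chained second step

print(verify(image, published))        # True: media and history are intact
print(verify(b"tampered", published))  # False: content no longer matches
```

The point of the chain is the disclosure Finlay describes: anyone downstream can check both that the media is unaltered and that an AI-generation step was declared somewhere in its history.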
That makes a lot of sense. I guess what I was particularly interested in is the aspect of deployment there. So, you know, in my mind, it's really hard to imagine the FAANG companies that you mention.
They were part of your origin story. They have enough economic incentives, let's pick deepfakes as an example, not to create this sort of media. And so I guess I'm less worried about them being involved in the creation.
But you also mentioned deployment, which I think is really where they have a huge role. And it seems like this is a case where maybe the people the partnership is focusing on are probably what we would call the people who are likely to behave well in the first place, from that generation standpoint. How do you reach the people who are not, say, not inclined or not incentivized to behave well?
Great question. None of the work that we're doing at the Partnership on AI should in any way stop appropriate regulation. I've always been a supporter of governments attending to, being aware of, and acting upon harms, and ensuring that citizens are protected.
And so regulation is a key part of thinking about an innovative accountability ecosystem. But at the same time, we do think it's really valuable to be helping those organizations that do want to be good and responsible actors to know what good looks like. It's helpful to companies in terms of the work they are doing; it's also helpful, I think, to ensure civil society and academics have a seat at the table in saying what good looks like. And then we also are finding that it is very helpful for policymakers. So helping them to better understand the details of the emerging technology, and where regulation is appropriate and useful, is part of the work we do as well.
What are the partnership's views on AI's impact on the workforce?
Yeah, this, I think, is one of the most fundamental areas that we all need to be thinking about. And it's funny that it's the one area in the whole AI development conversation that has this sort of air of inevitability to it. You know, it's sort of like, "The robots are coming."
"They're taking our jobs. We need to look at UBI. We need to think about all these other mechanisms as well." And so we've really rejected that inevitability and said, no, there are choices that firms and employers and labor organizations and policymakers can make to ensure that AI is developed to augment workers and to ensure that workers' voices are at the table when thinking about developing this technology. And this is a very complex question.
It's a question of education and reskilling, but it's also a question of how we are measuring and evaluating these systems to ensure that we're focusing on development that works for people. We issued a set of guidelines; we call them the Shared Prosperity Guidelines.
Really, we're trying to get very specific. They were based on a series of interviews and some research that we'd done with workers themselves to better understand when and how AI deployed into their workplaces was beneficial and when it wasn't.
Beneficial to whom?
To the workers. There are all sorts of times when workers realize the value in having AI systems working with them: so that they can be more creative, so that they can make more decisions, so they can be more innovative in the way in which they approach their work. When it's not working for them, of course, is when they feel as if they're being driven to rote tasks or repetitive tasks, or being surveilled in some way through the system.
So taking those insights and saying, OK, how do we make choices, clear choices, when we're deploying these systems to ensure that we're doing it in a way that works for workers? And so I think, you know, if you are an employer thinking about how to even begin to wrestle with these questions of how to use generative AI in the workforce, you need to be thinking about it in a very experimental way, right? Rather than thinking, "All right, it's going to save me these costs; let's deploy it 100 percent in this direction," think about it as, "Wait, let's pilot new technologies; let's hear from our workers." One of the things we know is that a lot of AI systems fail when they're deployed because we haven't thought through what it actually means to put them into a workforce setting, what it actually means to put them into a risk management setting.
Yeah, I love that, particularly the lens of: It clearly can give a fair amount of efficiency, but if that's the only lens, then what happens is, you're missing the bigger opportunity. But I also think, in that bigger opportunity, of how do AI and humans sort of grow the size of the pie together?
I think AI is not going to replace an entire worker, but it will replace tasks. So now 30 percent of my tasks have been replaced; I have to do something with that 30 percent.
I've got no shortage of things to do with that 30 percent.
You could fire 30 percent of me. But if you have not just a cost lens but much more of a growth and productivity and profitability and innovation lens, then the sum of all the 30 percents can go into creating all those new opportunities.
I was wondering whether there's any research or thoughts that the partnership has on what some of these opportunities are that are going to create jobs versus replace jobs, right? Because there's a lens here, which is, we've got to be ethical; we've got to make sure that harm is not done. And we also have to have a longer-term perspective of
not just replacing the workforce with, like, pure blind automation; that view is sort of more like protecting. But I wonder whether the partnership has any points of view on how you expand the art of the possible through the creation of new roles and new jobs.
One hundred percent. I mean, I think the way you've just described it is right: It's not a trade-off. It's not like responsibility and safety are a counterweight to innovation and you have to constantly be choosing between innovation and benefits.
We know that in order to be innovative, in order to be opening up new markets, in order to be thinking about new beneficial outcomes, you need to be thinking about how you are doing this safely and responsibly as well. There's a whole opportunity for generative AI to become much better, because today it still has real challenges, whether it's hallucinations or other ways in which it is deployed; it just hasn't really gotten to where it needs to be. But we're starting to think about, once it's there, how it can be deployed to really deal with some of the biggest global challenges of our time.
That's why I'm at the Partnership on AI: because I believe that AI does have that transformative potential to really support important breakthroughs, whether it's in health care or really the big questions in front of us around our environment and about sustainability. We're already seeing this in the predictive AI world, right, where we're starting to see it just becoming integrated into the scientific process across all sorts of disciplines. I do think, getting back to this question of, you know, the trade-off between responsibility and innovation, that one of the things that I hear from companies right now is they feel alone as they're trying to disentangle the risks of deploying these technologies and the benefits to their productivity and their innovation and how they serve their customers as well.
And so one of the reasons why I think the work that we do at PAI is important is, I want to say, there is a community of organizations that are wrestling with exactly the same questions, that are trying in real time to figure out: What does it mean to deploy this responsibly in your workforce? What does it mean to think about the safety of these systems and how they're operating, whether that's auditing, oversight, or disclosure, or otherwise? How do you experiment, and what is best practice? And so I think, more and more, if we can let companies and organizations know that there's a community that is actively working on these questions, where you can get some insights and really, in real time, develop what will become best practice, that's a good thing for them to know.
Yeah, it sounds like education and collaboration and sort of sharing all this is key, because absent that, it's really easy for many people, including many executives, to think of AI as, "It's a tool that's going to give me 12 percent productivity; I'm going to need 2 percent fewer people; let's figure out how to do it," which is a valid way of thinking if you're not exposed to everything else that's happening. And this is really interesting: You mentioned that so many of the participants in this partnership are actually big tech firms that are shaping the very thing we're talking about.
Yeah. I mean, I think it also goes to show that this is all very new. So many of the questions that we're dealing with, whether you're a large technology company or a small startup that wants to develop systems based on the release of these foundation models, or otherwise:
So many of these questions are being sorted out in real time. Now, that doesn't mean that we shouldn't be looking to other sectors. And I know you both have experience in sectors where we should be able to learn better about what it means to be safe and responsible as we deploy these AI systems as well. But I do think that even for policymakers, both keeping up to speed on the latest developments in the pace of the technology and also understanding the nuances of the technology, it's a tricky time. And so having places where we can learn together is really crucial.
"Learn together" makes a lot of sense, because I think one of your key missions is about collective good. And there's the idea you just said, that we're learning as we go, and this is a technology where you can't look at the back of the book and see the right answers. But we tend to learn better from things that don't go well. The news media, they pick up examples of extreme wonders and extreme terrors.
There is very little of this nuance you just mentioned. How do we elevate, and how do you get people to share, this nuance, and the good experiences and the bad experiences? How do you get them to learn for this collective good?
Yeah, that is just such a fundamental question. And of course, from your experience and from other sectors, we know that building a culture of safety means building a culture of transparency and disclosure.
right?
We were really happy, a number of years ago, to work with some researchers to develop an incident-reporting mechanism, which is now thriving. And this, you'll see, if you're looking at any of the emerging frameworks around foundation models or frontier models, whether it's the G7 work or work coming out of the OECD or elsewhere: This question of incident reporting is becoming very, very clear. And how do we create ways in which we better understand, once these systems are deployed, what is actually happening, and how can we all learn from them? So I think that's one piece of it, which is going to be crucial.
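As a concrete illustration of what incident reporting might capture, here is a hypothetical minimal record sketched in Python. The fields and names are assumptions for illustration; they are not the actual schema of the AI Incident Database or of any G7 or OECD framework.

```python
# Hypothetical minimal AI incident record; all fields are illustrative only.
from dataclasses import asdict, dataclass, field
from datetime import date


@dataclass
class AIIncidentReport:
    title: str
    system: str                 # which deployed model or system was involved
    date_observed: date
    harm_type: str              # e.g., "misinformation", "discrimination", "safety"
    description: str
    affected_parties: list[str] = field(default_factory=list)
    mitigation: str = ""        # what the deployer did in response


report = AIIncidentReport(
    title="Chatbot gave unsafe medical advice",
    system="support-bot-v2",
    date_observed=date(2024, 3, 1),
    harm_type="safety",
    description="Model recommended an incorrect drug dosage.",
    affected_parties=["customers"],
    mitigation="Added a medical-topics guardrail and human escalation.",
)
print(asdict(report))  # a structured record others can aggregate and learn from
```

The value is in the aggregation: once many deployers file records in a shared shape, everyone can see what actually goes wrong after deployment, which is the learning loop described above.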
When I think about, like, the last 25 years that I've been in this field, it's all typically a one-sided story from the perspective of the technology developer or proponent, which then forces a very negative sort of dialectic on the other side, from the group or organization of people who are trying to rein it back in, which gives us a very polarized way of dealing with this.
But we are at a precipice, or inflection point, where the speed with which we learn and adapt is really critical, which means we need to share things that aren't working. Just a couple of days ago, a YouTube video came across my feed that says, "How we messed up this customer's order." And the whole point was about how the company screwed up and what mistakes they made and then how they corrected it. And that's, like, all they do: They basically post all these videos on how they've messed up something and how they responded to it. And they get a lot of views, because they just literally celebrate their mistakes and learning from them, which is not something you see often, maybe.
I should just start a YouTube channel with all the goof-ups I'm making in class.
We'd get a lot of views for it.
You know, I've been thinking a lot about this question of openness lately, because, as you both know, there was a debate last year between open models, or open-source models, and closed models, and which were safer. And so in the executive order of last year, for example, there was a real interest in finding out more about open source and what the marginal risks are that are associated with those types of models being deployed.
But for me, this question of openness relates directly to the point I think you're making about sharing and how we build a culture of open sharing into the AI ecosystem. And so I think, first and foremost, it has to start in the classroom. It has to start in the research community itself.
As you know, in the AI research community, we don't have the same culture of ensuring that everything is published openly. We have a lot of research that is now happening behind closed doors. We don't have the same sort of journal and editorial and oversight perspective that we do in some of the other fields of science. We have a lot of things that are published via conferences or published directly to arXiv.
So what do we need to do? What does publicly funded research need to do, at that very, very early stage, in order to invent a culture of true scientific openness and scrutiny? And then, yes, the next piece of it has to be: What is the level of transparency and disclosure when these systems are being deployed out into the world? We put together a set of guidance all the way from predesign and R&D to post-deployment monitoring. What does it mean, at 22 places along the development and deployment ecosystem, to be consciously disclosing and attending to risks and ensuring that guardrails are in place? Think about the disclosures that are required in the financial services sector, for example, or elsewhere.
What does it mean to have that type of disclosure regime in place for some of these very, very large models? And then I think the last piece of it is, how are we making sure that we are bringing the public and citizens into this conversation about the way in which these tools are developed and deployed? So what does openness mean when we think about citizen engagement in this process as well, being really part of the tech development process to ensure that their voices are heard about how they want the technology to work for them and not on them? So I do think, as we sort of build our skills and capacity around the deployment of this technology, we need to be thinking about openness, sharing, and disclosure in order to ensure that it really does work for us.
You mentioned public funding, and I think that's part of the thing to note. That's one thing that interested me in the partnership: So much of basic science used to come out of universities, used to come out of that kind of funding, but that's not really the case here. It's not universities but the FAANG companies that started the partnership.
That's what seems particularly important: that maybe the partnership has a role here that might not have existed with prior types of technologies, because the costs are staggering and universities are not really able to participate in the research. OpenAI has, we hear, close to $10 billion of funding from Microsoft, but the NSF had about a $9.9 billion budget in 2023. So that one project would wipe out the NSF budget entirely. It's not clear exactly how that plays out, because we talk about openness.
I think the community's been very open about sharing algorithms; you know, those algorithms are very widely dispersed. But openness really depends on data, too.
I also feel like there's really the question of openness on the deployment, and, like, where it's being used.
You mean how it's being used? Where is it messing things up? You know, it's beyond the algorithm, right? It's the data that gets fed into it.
It's all the decisions that get made. It's the human role, just in general. Like, Sam, your comment around sharing, and, Rebecca, your point around transparency really resonate with me, because I just feel like,
generally speaking, when it comes to technology, it's sort of an alpha game of, like, you know, "This is the best one; it's better than the other one. Its accuracy is more; its errors are less." Like, everything is always so great and better than everything else. But I do feel like there is something here, and maybe it's also more on the evolution of us as a society, as to just accepting and acknowledging that these systems make mistakes. And if you're not hearing about it, it's because somebody is hiding it, not because it's not happening.
And so I think, the more some of these big players begin to openly talk about these things, the more they're not the only ones, right? I think, like, if many of these larger players in this world really exhibit the same kind of openness that we're talking about here: "Look, we made a mistake. This is how we're fixing it. This is what happened." I actually think that would go a long way. I think that would increase public trust a lot more and would allow a real dialogue, because I actually feel like this conversation is very much polarized at all levels. It's very polarized between those who understand AI and those who don't, and, within those who do understand AI, very polarized between the proponents and the folks against it, fearmongering. Yeah, I feel like we need to break that polarization and bring these two sides together.
So maybe just to shift slightly, Rebecca: We've been talking a lot, I think, about a Western-centric role, but so much of this is going to affect the globe. What does the partnership think about, and how does it try to work with, those outside of the West?
Absolutely. And of course, we know today that many workers outside of the Global North are very much consumers of the technology that is being developed in the Global North as well. So we need to make sure that they are part of the global governance conversation. I was really heartened to see, just this September around the U.N.
General Assembly, that there was a really big initiative to think about bringing many, many, many of the voices that are not around the table at the G7 or many of the other discussions, whether it's, you know, the safety institutes that are being developed in the different countries, to bring those countries and those voices into the conversation about what is needed from a global governance perspective. We had the high-level advisory body to the secretary-general at the U.N.
release their report, really starting to think about what that means. There's a lot of work that we've been doing to try to hear from organizations across the globe: How is AI currently working within your populations? How would you like it to be working? And that's both from companies and startups that are developing technologies through to many academics who are developing all sorts of new technologies, thinking about the way, for example, to develop data sets with languages that are not prevalent in those being developed out of the West. So really, I mean, so much interesting work is happening, and thinking about how to get those voices into this conversation is core to what we do at PAI as well.
So now we have a segment called Five Questions, where I'll ask you a series of rapid-fire questions. Just tell us the first thing that comes to your mind. What do you see as the biggest opportunity for AI right now?
The biggest opportunity for AI is for companies to start experimenting with it to see how it can truly drive innovation and potential in the work that they're doing, and to start experimenting in a low-risk, high-value way as soon as possible. The more you use the technology, the more you understand what it can do for you and what it can't.
What is the biggest misconception about AI?
That it is a technical system that sits outside of humans and culture. I always say AI is not good and it's not bad, but it's also not neutral. It is a product of the choices we make.
Very good. What was the first career you wanted? What did you want to be?
I think when I was youngest, I wanted to be a journalist. I wanted to tell the stories of people and societies, and I'm very happy that, as part of my job, I get to do that today.
When is there too much AI?
I think there's too much AI when we see people deferring to AI systems or overtrusting AI systems. You know, one of the things that we know is that we tend to overtrust something that happens through computers or machines.
What is the one thing you wish AI could do right now that it can't?
Oh my goodness, where do I get started?
You have to pick just one.
You know, my favorite use of AI right now is my bird app. I don't know if you've used the Merlin bird app, but it's a great app for bird-watchers, because you can take a very fuzzy picture of a bird, and it will give you a nice match in its system, or you can take a recording of a birdsong, and it will tell you what the bird is. So I guess what I would love AI to do is to maybe help me see the birds, help me find the birds in the trees.
Just like Pokémon Go for birds.
I guess I would really love that. I've been enjoying so much learning more and more about birds, but I need to find them.
Rebecca, it's been wonderful speaking with you. Thank you.
It has been my pleasure entirely. Thanks so much for having me.
Shervin, I thought Rebecca had a really interesting perspective: this idea of, we're all in the same boat; we're all developing this technology, and none of us has any experience with it, but it affects all of us. That's a great perspective, and it makes me optimistic and worried at the same time.
Well, I agree with you. I also think that she had a very good point on the importance of transparency and sharing. And sharing, like the question you asked about, you know, how do we get people to share more about the things that don't work? We hear all the positives. I actually think, between the two points of "Look, we're all in this together,"
it will help us all if we're transparent and collaborative about it. And in many ways, I mean, AI has gotten to where it is mainly because of the open-source nature of it, right? But when we say open, we don't just mean "let's share the algorithms." In this new paradigm we're entering, it also means sharing lessons from the deployment, and things that don't work.
And for me, the YouTube thing I was referring to was really eye-opening, because, like, usually when you see a tagline that's like "How we screwed up this customer's order," you're like, "Why does this have so many views?" and you're looking for something chaotic, and they literally are showing all the mistakes they made and who made those mistakes. And it doesn't matter what mistakes you make; it matters how you correct them and what you're doing in correcting them. I actually think we've entered a phase where we need to, as a society and as a community of technologists and innovators, share a lot more about things that don't work and why they don't work. If we're truly going to be open source, we should be open source about that, too.
Yeah, that's a great expansion. When you hand back an exam to people, they skim right over all the things that they did correctly, and they focus on the ones they did wrong, because that's where we have the opportunity for learning. And actually, to make it an even more machine learning example:
That's the fundamental idea behind boosted trees and the boosting algorithm: You pay a lot more attention to the places where your model makes an error than where the model gets it right.
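As a quick sketch of that point, here is a minimal AdaBoost-style loop in Python. It assumes scikit-learn and synthetic data purely for illustration: each round, the examples the current weak learner gets wrong are upweighted, so the next learner pays more attention to the errors.

```python
# Minimal AdaBoost-style reweighting: mistakes get more weight each round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # synthetic labels in {-1, +1}

weights = np.full(len(X), 1.0 / len(X))
for round_num in range(5):
    # A depth-1 tree ("stump") trained on the current example weights.
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=weights)
    pred = stump.predict(X)
    err = weights[pred != y].sum()                   # weighted error rate
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
    # Upweight the examples the stump got wrong, downweight the rest.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()
    print(f"round {round_num}: weighted error = {err:.3f}")
```

The final boosted model is a weighted vote of the stumps; the reweighting step is the "focus on what you got wrong" idea in code.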
And I think we got another example here where we're getting a little more depth behind what it means to go beyond the platitude. You know, we've got these ideas that no one disagrees with: Do the right thing; don't do the wrong thing. That kind of concept vacuousness. We were talking with a guest about being adaptable: Services should be adaptable, but how do you be adaptable? Well, today we were talking about openness. How do you be open? I don't think we're at the end of this, but it's an interesting thing to think about.
We need to celebrate transparency around mistakes and how we correct them.
One thing I liked that she also said was about this idea of sociotechnical. I think that a lot of people view this as a technical problem that has a technical solution. For example, we have copyright infringement, and so watermarking is a technical answer for that.
Yes. And you know, I'm not against watermarking. I think there's a lot of potential for that, but it's not just going to be solved by a technical solution.
These are things that are operating within societies, within cultures. And that's the thing that keeps coming through. We did a report a couple of years ago about the intersection between culture and the technology of artificial intelligence. It seems like there's a growing recognition that this is more than just a new set of algorithms.
That's right. That's right. I mean, when I think about any technology that's widely available to people, whether it's cars or whatever, it is a question of responsibility of an individual, too, right? But I think that the cultural aspect of this can start from within corporations, right?
I mean, we did talk about how AI makes individuals, and then teams, and organizations more proud and happy, etc., right? I do really think that the point around the intersection of society and culture with technology is going to be really, really key here, rather than just a bunch of regulations and a bunch of technology artifacts that sort of correct mistakes or prevent mistakes, right? It's how we choose to use something. When I get behind the wheel of a vehicle, it doesn't matter whether there is a speed limit. I mean, that does matter, but, like, it doesn't matter whether I am being followed or there is a cop or whatever. There is something about being responsible, and it's ingrained
in us as people.
Yeah, it's like, look, you can't be reckless, because you can endanger your own life and other people's lives. There is something about that that we've now begun to accept as a society; it just goes with being responsible with the tool. And Rebecca said it very nicely: It's not good or bad; it just is. It's how you use it that is going to make a difference.
If I step back and I think back on some folks we've had: We've had Mozilla, for example; we've had the Partnership on AI; we've had Amnesty International. And these are not some of the traditional companies using artificial intelligence. And, you know, I think some of these organizations may fail; some of their initiatives might not end up working well.
But I think the fact that they're trying and they are pushing toward these things can help that collective good, even if they don't quite reach the goals that they set for themselves, even if they just come close. I think it's an important part, and I'm glad we've had some of these organizations on here to share that. Thanks for listening.
Our next episode is a bonus episode, where I speak with Oxford's Carl Frey and LinkedIn's Karin Kimbrough at a recent conference on jobs in the age of artificial intelligence. Also, if you have a question for us about AI strategy, implementation, use, favorite flavor of ice cream, or anything else, we have an email address: smr-podcast@mit.edu. We'll include that email in the show notes. Tell us your name, where you're from, and what question you have, and we'll dedicate an episode to hearing some of those questions and the best answers we can come up with for them. Thanks.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.