Oct 3, 2023

69. AI and Leadership

On this episode, Jeff welcomes Rob Seamans, PhD, an Associate Professor at the NYU Stern School of Business. Rob is also the director of the Center for the Future of Management, and was previously the Senior Economist for technology and innovation on President Obama's Council of Economic Advisers. Rob’s research focuses on how firms use technology in their strategic interactions with each other, and the economic consequences of AI, robotics, and other advanced technologies. Rob and Jeff explore the rapid adoption of AI, its economic implications, and the challenges it poses to organizations. Rob shares key findings on AI's impact, discusses cultural aspects, and addresses productivity gains, industry benefits, and how to foster inclusive AI-driven cultures. Listen in to discover strategies for addressing AI-related job-security concerns, reskilling the workforce, and ensuring ethical AI implementation with a focus on responsible design and policymaker involvement.

Transcript

Intro: Duration: (02:03)

Opening music jingle & sound effects

Jeff Hunt:

This Human Capital podcast is brought to you by GoalSpan, a performance management app that helps you set goals, get real-time feedback, run reviews, and align your workforce around what's most important. With GoalSpan, you can integrate with all your favorite HR and payroll apps. To learn more, go to goalspan.com.

Welcome to the Human Capital Podcast. I'm your host, Jeff Hunt. When I think about new technologies at work, it seems that they typically start out with a small number of people in a few departments, but as the value of the technology is demonstrated, eventually everyone in the organization finds a way to use it.

Artificial intelligence, and specifically generative AI, is currently experiencing this kind of adoption curve. But the pace of adoption is accelerating like nothing we've seen in the last decade, which is why governments and businesses alike are struggling to keep up. To give you an example, ChatGPT reached 100 million users within its first two months.

Today we're going to talk about the macro and microeconomic trends that are driving the adoption of AI. We will talk about how leaders can utilize AI without compromising the most valuable asset they have, their people. Today I am welcoming to the podcast an expert in this field, Rob Seamans. Rob is Associate Professor at the NYU Stern School of Business, where he is the Director of the Center for the Future of Management.

Rob's research focuses on how firms use technology in their strategic interactions with each other and the economic consequences of AI, robotics, and other advanced technologies. Rob was previously appointed as the Senior Economist for Technology and Innovation on President Obama's Council of Economic Advisers. Welcome to the podcast, Rob!

Rob Seamans:

Thank you very much for having me, Jeff.

Topic 1. Who or what inspired you in your career? (02:04)

Jeff Hunt:

It's great to have you on the show. I've been excited to talk about the topic of AI for a while, and there's a lot of pent-up demand from people trying to understand what the implications of this technology are. So I'm excited to get into our conversation. But before we jump in, give me a thumbnail of your career journey, and also, if you can, share whether there was anyone who inspired you along the way.

Rob Seamans:

So, let me start at the end and then I'll jump to the beginning. I'm a professor at New York University's Stern School of Business, so I'm a business school professor, and I'm an economist by training. But rewinding multiple decades, and I won't tell you how many, I didn't set out to become a business school professor. When I was in college, I was an English major studying English literature. This was Reed College, a college in the Pacific Northwest. As an English literature major, you had to do a minor in a foreign language, or the equivalent of a minor in a foreign language.

And so I did my minor in Chinese. And I realized that in order to really, fully understand the language, I needed to actually spend a school year abroad in China. So I did; this was in 1993, 1994. And at that point in time, China was just growing dramatically.

The year I spent there just blew my mind in terms of how rapidly the economy was growing and changing. Over the course of the year, I got to see the power of an economic system to change people's lives for the better, on average. And so when I came back to Reed, I was so far along with my English major that I completed it, but I tried to supplement it with as many classes in economics as I could.

I had never taken a course in economics before. I graduated, worked for a couple of years, then went back to graduate school and did an MBA and then a master's in economics, and realized that what I really wanted to be doing was studying economic questions at the firm level.

So, how are certain technologies, let's say, affecting the economics of a firm or the economics of a region, and things like that. And so I went to graduate school to pursue that, graduated in 2009, and then ended up here at New York University's Stern School of Business. So that was probably more than you bargained for, Jeff, but you asked, and it was not a straightforward trajectory, hence it was a bit of a mouthful.

Now, let me come to your question about somebody who has inspired me. First of all, many, many people have inspired me over the course of my life and my career. But I do want to highlight one person, one of my English literature professors. I found her incredibly inspiring because she was really tough, and I was so nervous to take this very tough class that she offered.

Her name was Gail Sherman, an English literature professor at Reed College, and her specialty was Geoffrey Chaucer. Everybody warned me how tough a professor she was. And I told myself, okay, I'm going to challenge myself, I'm going to take this class. I took it. I loved it. I loved working with her, and I ended up doing my senior thesis with her.

Every student at Reed has to do a senior thesis. I did mine on Geoffrey Chaucer, in particular a character called the Wife of Bath from the Canterbury Tales. Anyway, that has nothing to do with my current career, but it taught me a really interesting lesson, which is to not shy away from things that seem challenging, because it could be those things that in many cases are most inspiring.

In that specific instance, I thought about shying away from taking that class from her, but in fact it led to a whole bunch of super interesting things that I did with her as part of my senior thesis. So I'm glad I didn't shy away from it, and the life lesson from then on, for me at least, has been not to shy away from stuff that feels like it might be challenging, because it could be a great opportunity.

Jeff Hunt:

I love that story, and it's such a great example of how the people in our lives who push us to stretch are often the ones that help shape us into who we are today. And we almost never regret pushing and stretching ourselves. I love your story about the year abroad in China, Rob, because that had to be a stretch.

Was there a powerful memory that you took away from that time abroad?

Rob Seamans:

The one that I think is most closely tied to what I was describing earlier involves a person whose name, I'm ashamed to say, I can't remember, and we have not stayed in touch. But it was a person who lived in the neighborhood right near the school I was attending.

He operated what's called a jianbing cart; jianbing are these small savory pancakes. That was his business, and that's how I got to know him; I was a frequent customer for these jianbing. He made enough money from that to be able to purchase a taxi cab. Then, over the second half of the year, he was driving the taxi cab around and ended up earning enough money from that to purchase a condo for his mother in a nearby housing area.

And just seeing the dramatic changes in his life over the course of not even a year, just 10 months, was eye-opening for me. So that's one anecdote.

Topic 2. Key findings on AI and use of robots in the US and abroad. (07:38)

Jeff Hunt:

I can see how that would inspire your interest in economics, just from that one little example with this guy. And Rob, you've done so much research on the use of robotics and AI, both in the US and in other countries. I'd love to start our conversation with some of your key findings.

Rob Seamans:

Again, Jeff, thanks for inviting me onto the show. I'm really looking forward to digging in on all of this. There are many different findings one could highlight. And of course there are differences between robots and AI.

I think we'll get into that, but let me start with robots. One of the recent findings that I find fascinating comes from some research using data from the U.S. Census Bureau. It turns out that the distribution of robots within the U.S. is highly skewed. Some plants in some areas have many more robots than you might expect, and that holds even conditioning on the industries in that area. So imagine we're focused only on manufacturing plants in the automotive sector: there are some places where you see lots of adoption and use of robots in that sector, and other geographies where you see hardly any adoption at all.

And it seems like the presence of specialized human capital matters a lot in terms of explaining that pattern. That's something I find pretty interesting, because I think it suggests that, at least when it comes to robots, there are going to be some areas where you see a whole lot of adoption and other areas where you don't see a lot of adoption.

And to the extent that there might be positive or negative spillovers from the adoption of these robots, it feels like those are going to be concentrated in certain areas instead of widespread across the U.S. Anecdotally, I've started to dig into some data on adoption and use of AI at the establishment level, and it looks like some of these same patterns hold: adoption is highly skewed across areas, even conditional on industry.

So we're looking at the exact same industry, but there are some areas where you see a whole lot of adoption of AI and other areas where you see very little. I don't exactly know why that is yet, but again, I think the presence of things like specialized human capital and other complementary assets will be factors that matter a lot in terms of explaining that pattern.

Jeff Hunt:

That makes sense. And the other thing is, on this show we talk a lot about corporate culture, but there's also national culture across different countries. When you think about the use of robotics and AI in different countries and in the U.S., have you seen some differences? I know you've done some research in Japan, but what have you found there as well?

Rob Seamans:

Yeah. So, I haven't done research in Japan; I have done some research in China and research in the U.S. But I know of other folks who have looked at the adoption and use of robots in Japan. Since you mentioned Japan, let's talk about that first. What's interesting is that the adoption and use of robots does vary a lot by country.

There are some countries, like Japan, that for demographic reasons really try to encourage the adoption of robots. There's a great study that was done by some researchers at Stanford on the adoption and use of robots in nursing homes in Japan. It turns out that there are some prefectures in Japan that subsidize the adoption of robots in nursing homes.

And the reason for that is that working at a nursing home is pretty difficult work for a human. It involves a lot of lifting of heavy human bodies; you could throw your back out when you're trying to get somebody out of a bed or into a tub, and things like that. And so there are certain robotic assistive devices that the health aide could either wear or operate that help with the lifting and moving of the person.

And it's not that the person is very heavy or anything, but it's heavy enough that, again, you could throw your back out just by lifting these folks up. Because of that physically taxing nature of the job, it turns out it's pretty hard to find people to work in these environments unless there's a robot there to help.

And so what they found is that lots of nursing homes do like to adopt robots and use them for these purposes, and it becomes easier to hire people when the nursing home has a robot. So again, that's a setting where we see subsidies being used to try to encourage adoption. And a lot of that has to do with the demographics in Japan, where you have a larger aging population relative to the younger working population.

Jeff Hunt:

I love that reference point, because so often, at least in the United States, employees are concerned about the adoption of robots because they see them as potentially replacing their jobs. And so it's almost a paradigm shift to understand that the implementation of robots can be a supplement to the job rather than a replacement of the job. Is that correct?

Rob Seamans:

Yes. Robots and other technologies certainly could and do augment humans in their jobs, as opposed to replacing humans. Now, that being said, it does seem like this fear of replacement is something that lots of people in the U.S. experience, and some of the research speaks to this.

And again, this isn't research done by me. This is research by some folks at the University of Pittsburgh, led by Osea Giuntella, if I'm pronouncing his name correctly, who, by the way, just yesterday published a fantastic Brookings report on some of this research.

He's done some research looking at areas in the U.S. where there's a lot of exposure to robots, that is, where lots of firms have adopted robots. And what he and his coauthors find is two things. First, physical health improves, which you'd expect; I gave you that story about the robots helping at nursing homes.

Physical health improves in these areas. People are hurt less often, they're not coughing as often, they don't have asthmatic issues as often. Some of that, by the way, has to do with robots being used for what are typically called dirty types of tasks. Now we can have a robot do those instead of a human, so you see a decrease in pulled muscles and issues like that. So overall physical health improves. However, it also turns out that in the U.S., in those areas where you see a lot of adoption of robots, mental health issues increase.

And so it seems that human workers in the U.S. really do fear robots, or for that matter new technologies generally. Now, in contrast, this research team also did the same thing looking at data from Germany. What they find there is, again, an improvement in physical health outcomes, but they don't find any effect on mental health outcomes. They speculate that part of the reason is that the social safety net is very different in Germany than it is in the U.S., so workers in Germany are just not as worried about losing their job to a robot or other type of technology. And I think what that points to is something you were talking about earlier: thinking about some of the differences across firms in terms of the reasons why a robot or other new technology might be adopted.

In some cases it might be adopted to augment the work that a human is doing, but in other cases it might be adopted to substitute for the work that a human does.

Topic 3. Enhanced productivity and the AI-driven evolution. (15:07)

Jeff Hunt:

You mentioned earlier that there are different adoption rates, especially for AI, among different industries, or maybe even business functions. Can you comment more on that?

Rob Seamans:

Yeah, so other folks have looked at this, and there are very clear patterns that emerge. You have certain industries, like IT and finance, where you see relatively high levels of adoption, and then certain other industries, like construction, where there are relatively low levels of adoption.

One of the surprising ones is healthcare. There's some very nice research that's been done by Avi Goldfarb, a professor at the University of Toronto; Florenta Teodoridis, who's at the University of Southern California; and Bledi Taska, the chief economist of a firm called Burning Glass that actually provides this data.

And what they find from analyzing this data is that the healthcare sector in the U.S. is, according to their data, the second-lowest adopter; the lowest is the construction industry, and the second lowest is healthcare. It's interesting because there is so much data that healthcare providers collect on you and me and everybody else.

So it feels like the type of industry that would be ripe for a whole lot of use of AI, right? You could digitize a lot of the records and things like that, and potentially there are lots of productivity gains that could be had. And yet there's very little adoption of AI in that sector. They point to some potential reasons why. Some of it might have to do with regulation, which varies across states in the U.S. on top of overall federal regulation. There are also different requirements from insurance providers, and there's the presence of very old legacy systems, and things like that.

So in any case, healthcare is one of these sectors where it's sort of surprising. I would have expected much more adoption, but it turns out there's very little.

Jeff Hunt:

Now, when you look at productivity increases, do you have any research on how much productivity is increasing due to the adoption of AI tools?

Rob Seamans:

At a macro level, the short answer is no. Economists care a lot about this and are eager to see it in the data, and it's not yet there. You can look at macro-level studies that have been done of the adoption of robots, and there you do see what looks like an effect on economic growth.

On average, again at the national level, it looks like robots have probably contributed about 10 percent of economic growth over the prior couple of decades. To put that into perspective, on average across the countries in this sample, that would be like moving from roughly 3 percent to 3.3 percent GDP growth, or something like that. One would hope that eventually we'd see something like that for AI, but it's way too early to see it in the data. Now, that being said, there are a number of recent papers, and by recent I mean the last six to 12 months, that have been looking in particular at generative AI and at very specific uses of generative AI inside firms.

And they do find productivity effects, though these are effects that are very specific to the actual setting and the actual implementation. Just to give you a sense of this, one of the studies looked at something called GitHub Copilot. GitHub is a repository for software code that a lot of people use, and over the past decade or so a lot of people have put a lot of software code into this online repository. What GitHub has done is analyze all of that code to try to predict patterns in coding. So if I, as a coder, start to code something, GitHub can look at the entire corpus of code that they have to get a good sense of what code might come next based on what I've been coding up.

And so they developed a tool around this called GitHub Copilot, which folks can now use if they want. And the study found that software developers who rely on GitHub Copilot experience an increase in productivity. I forget exactly what they estimated as the point value of the increase, but it was a substantial increase in productivity.

Now, one of the things that's interesting is that most of those productivity gains came from the less experienced coders. The more experienced coders had a little bit of a bump to their productivity, but it's really the less experienced coders that experienced a sizable bump. So it sort of brings them up almost to being on par with the more experienced coders.

And I think what that study does is hint at how potentially game-changing some of these new generative AI technologies can be. You might think about a worker shifting from one occupation to another, or from one job function to another, and you worry: boy, it's going to take them years to get up to speed on what they need to do in this new job function.

However, perhaps with the help of generative AI, they can move up that learning curve more quickly. I find that exciting, and hopefully it points to a whole lot of productivity gains that we will get from AI. But to your original question, no, we don't yet see that in the macro data, though hopefully we will.

Topic 4. AI in the HR performance management space. (20:31)

Jeff Hunt:

Yeah, that's fascinating, and it brings up a whole topic around learning and development. You mentioned the less experienced coders increasing their productivity because of the ways these generative models are assisting them. Running a company in the HR performance management tech space, it seems to me that one of the greatest opportunities is for these generative AI models to help with learning and development in all different capacities within an organization as well. Do you agree with that?

Rob Seamans:

Yes, absolutely. I completely agree with that. I think the one slightly tricky thing is this: just because you find that AI or generative AI can help in one instance, within one function within a company, taking it and spreading it to the rest of the company is not necessarily an easy task, right?

It might be a completely different algorithm that you need, trained on completely different data, and things like that. And that's before you even get to moving from one company to another company, which might have slightly different data and slightly different internal business processes that matter for it relative to a different company.

And so while we do see examples of AI really helping in certain very specific settings, it is a little bit harder to generalize across an entire company, or across an entire industry.

Topic 5. Strategies for addressing job security concerns. (21:55)

Jeff Hunt:

Now, the rapid deployment of AI has been amazing, and I mentioned this in the introduction of this episode: it feels like it's faster than any technology we've seen in a very long time. With that comes a lot of unknowns. There are unknowns for business leaders, executives, and those in the C-suite who are trying to figure out how to adopt these technologies in the right way, and not have them be a distraction but have them actually add value to the ways they help their customers and their employees.

And there are concerns for employees who, as we mentioned earlier, may be thinking about job security. One example is the Hollywood writers' strike, where one of the elements they're concerned about is that generative AI may actually replace some of those jobs. And so I know that you're doing a lot of research on AI and studying this space, Rob, but I'd love to have you just put all that aside and share any comments or thoughts you might have for people who have concerns about this, both in the leadership capacity and in the employee capacity.

Rob Seamans:

Okay, so that's a big task that you've given me. But that's okay; I give out tough homework to my students and I expect them to deliver, so it's okay for me to be on the receiving end, I guess. So, three points. One, from the firm point of view, firms absolutely need to be leaning into this moment.

There's no question about what AI, and in particular generative AI, can do. There's been a kink, at least in terms of what we see that it can do; there's been a dramatic improvement in performance. And it looks like the technology can be very, very helpful, at least in certain settings.

And so as a business leader, you should be experimenting with this new technology and trying to think, in a bit of a brainstorming way, about different ways that you could be using it. That's from the firm point of view.

From a worker or employee point of view, I think it's the same type of thing. AI, generative AI, these large language models are going to change the way that we do work. I think they will mostly augment the way that we do work, much in the way that the computer and the internet augment the way that we work, rather than replace the work that we do. But that being said, I think there's a lot of opportunity here. It's not clear for any given job, or for any occupation, or for any industry, exactly the way in which these new technologies will be used.

And so I think it's an opportunity for workers to try to figure that out, right? Try to figure out ways in which you can use these new technologies to make your job easier, to make your boss's job easier, and the ways in which the technology really doesn't work very well. And I think it's those employees, those workers who lean into this moment and become as skillful with this new technology as they can, who will do well and benefit from it. So that was the second bucket.

And then there's a third bucket of comment that I wanted to make, which is linked to both the firms and the workers, and it's this:

You know, we really are in the middle of something right now, and we don't know what things are going to look like in the future, right? Nobody knows exactly how large language models are going to affect, say, the education industry or how large language models are going to affect the insurance industry.

There are lots of people who express a lot of confidence about how they think these things are going to play out, but they don't know, right? People are making predictions, with maybe more confidence or less confidence. So I think there's a lot that's up in the air, a lot that's uncertain.

And so again, from my point of view, that suggests there's a lot of opportunity. I would really try to highlight that, especially for anybody who's worried about the technology: instead of being worried about it, think about how many opportunities it can potentially provide you.

Jeff Hunt:

As you're sharing that, I'm just reflecting on the importance of communication in the midst of these rapid changes. So, for instance, if you're an employee and you're really trying to understand how you can leverage generative AI technology to improve the value that you bring to your employer, the first step is actually to experiment, but also to talk to your manager.

And if you are a leader of an organization, if you're in the C-suite, the things you're going to want to focus on are strategic: what sort of decisions are we going to make around implementing AI tools in our organization? Maybe that begins with strategic planning; it should really be a large element of our strategic planning process.

How is this actually going to change the way we deliver value to our customers so that we can be differentiated as a company? How is it going to change the way we can improve our culture and our productivity internally? We can either be proactive or reactive in our approach. So it sounds like what you're saying is that being proactive, thinking these things through, experimenting, and communicating is really going to be the pathway to leveraging these tools to the fullest.

Rob Seamans:

Yeah, Jeff, I completely agree with that. And I think you're making a great point as well about the strategic direction of a company. You mentioned the C-suite specifically, which I think is right, but what I would also love to see is something like a strategic working group around how a firm could be using generative AI that doesn't just include folks from the C-suite, but includes folks from all up and down the hierarchy within the organization.

Because when you think about who best knows the job that they're currently doing, it's actually the folks who are in that job, and they might have tons of insight about ways in which this new technology can be used. And so you really want to think about ways to empower those folks to experiment with the technology and to share what they find with you.

I think it's those organizations that are going to be the early adopters, the fast adopters, and that will benefit the most.

Topic 6. Responsible AI design and the role of policymakers. (28:24)

Jeff Hunt:

It's a great piece of advice. I want to shift and talk a little bit about the ethical considerations surrounding AI. I mentioned in the intro that governments are struggling with how to keep up with this because it's evolving so fast. I'm curious about anything you've learned about legislation that is now being implemented, either at the local or international level. Anything you can share about that?

Rob Seamans:

So, this is a really important area, and it's an area that I'm ashamed to say I don't know nearly enough about. I mean, I do a lot of research on AI.

I do a lot of research on firms adopting AI. In the past, I've done policy work, and I like to write on policy, especially policies around new technologies like AI. Ethics are a big piece of this. The biggest problem I have is that there is so much going on right now. There are so many different policies being considered at the city level, the state level, and the federal level, and that's just within one country, let alone differences across countries.

Jeff Hunt:

And it raises the question, and the importance, for the consumer, the business, and the individual of having a really clear understanding of how companies are building AI systems. Are they designing them responsibly? Are they considering things like potential bias and privacy concerns?

I know that in my company, GoalSpan, we've implemented our first release of AI tools that do an incredibly good job of synthesizing large amounts of performance data for an individual and coming up with just a few key points for the manager: things like employee strengths, development areas, and questions to ask the employee when you're in a one-to-one meeting.

But it took a tremendous effort to make sure that we were thinking through all of these ethical considerations and then communicating with our customers in a way that was very transparent, so that they knew exactly what they were getting into. So I guess I'm curious if you have any thoughts or comments about how people should make decisions around leveraging and implementing some of these tools.

Rob Seamans:

Yeah, I do. But first, Jeff, that was super interesting to hear, so I'm going to throw a question back at you.

Jeff Hunt:

Sure.

Rob Seamans:

It sounds like you and your firm went through a careful process of thinking about what ethical considerations are going to matter for you when you put this product together.

And I could also imagine that there are maybe some potential customers that are turned off by them.

Jeff Hunt:

That's right.

Rob Seamans:

So that's a business decision that you made: focusing on the ethics is, in this case, going to be more important to you, at least in the long run, than any specific line of business from a customer.

Jeff Hunt:

Absolutely. In fact, what we've done is provide the administrators of our product the ability to literally opt out or turn off the entire AI feature. We let them know what it is, how it impacts them, and the value that it delivers, and then they can make an intelligent decision as to whether or not to utilize the tool. Whereas other technologies don't always do that; it's integrated into what they're offering without giving the consumer the choice of whether or not to utilize it.

Rob Seamans:

It's interesting. And when your customers opt in, do you get an additional revenue stream?

Jeff Hunt:

Initially, we structured our products to charge more for this module, but we've now decided that we're going to have it as an additional feature that we're not going to charge for, because we believe in the long run that the adoption of these tools is going to become so integrated, it's going to be difficult for us to justify charging anything.

And we've also found that with the way we're implementing these technologies, even though there is an expensive initial development cost, the actual overall cost is very low relative to the user fees that our customers are paying. So we really just want to figure out a way to add value in the most simple and clean way, without contaminating the pricing structure, if you will.

Rob Seamans:

So, coming back to the point about ethics. Some of the research that I've been doing recently looks at startups in the AI space and the extent to which they are engaging with ethical AI principles. As a side note, when you look at all the largest firms, like the Microsofts and those types of firms, they all have a set of ethical principles when it comes to the use of AI and things like that.

There is a little bit of debate about how much of that might be window dressing, but nevertheless, they do have those principles. And so we thought it would be interesting to look at startups. By startups, we mean companies that are less than 10 years old, that are venture backed, having gotten some venture backing, and that are using AI in the development or sale of a product.

This, by the way, is joint work with Michael Impink, who's a professor at a business school in France, and Jim Bessen, who's a professor at Boston University. Together we've been running a survey of startups over the past five years. Every year we survey these startups, and we have started to ask about the use of ethical principles.

It turns out that about 60 percent of the startups in our sample have a list of ethical principles of some sort. We actually thought it would be higher, because it struck us as pretty easy to have some ethical principles. Whether you abide by them or not is the bigger question, but just having some ethical principles, we thought everybody would say yes. Yet it's only about 60 percent or so that say they do.

Now, what's interesting is what happens when we then ask firms about this. We don't ask, do you abide by them? Because we figured whoever has them would say, yes, we abide by them. So instead we try to ask questions that get at the extent to which these firms might abide by their principles. We ask things like: have you ever turned down business because it conflicts with your ethical AI principles?

Have you ever spent extra money to purchase additional data to make your training data more representative? Questions like that try to get at either costs or foregone revenues, things that are costly to the firm in terms of maybe less profit, at least in the short run.

And we find that a substantial percentage of the folks that say they have these ethical AI principles do, in fact, look like they abide by them. But here's what's interesting: the startups that really look like they abide by their principles are the ones that end up being able to raise more venture capital financing within a year or two, whereas the ones that just have the window-dressing type of ethical AI principles, and don't look like they necessarily abide by them, end up a couple of years down the road having a harder time raising additional follow-on venture capital financing. And so to us, it suggests that while in the short run there might be some cost to you, in the longer run, having these principles and abiding by them can enhance the performance of a firm. So we found that interesting.

Jeff Hunt:

It's a great example, and I can totally understand why they would raise more capital, because the investors are doing everything they can to mitigate risk. And so if they're actually following these principles, it's more likely that they're going to be mitigating short- and long-term risk. Isn't it?

Rob Seamans:

I definitely think that's part of it. Not only does it mitigate risk, but I think it also signals to the venture capitalists that this is a firm that believes it's going to be around for a long time.

Jeff Hunt:

Sure, right.

Rob Seamans:

Because they don't worry about turning down this line of business or that.

The thinking is: it's not going to matter five to ten years from now, when we're making billions and billions of dollars; what really matters now is that we have a certain belief about the world and how to approach it, we've tried to encapsulate that in our ethical AI principles, and we back it up. As an investor, that would give me some confidence in investing in a firm.

Topic 7. Lightning round questions. (36:33)

Jeff Hunt:

It makes perfect sense. Okay, Rob, as we begin to wrap up, I've got some simple lightning round questions to throw at you. So can we shift into that?

Rob Seamans:

Yes, I'm ready.

Jeff Hunt:

The first one is what are you most grateful for?

Rob Seamans:

I'm grateful for this job that I have. I love being a professor.

Jeff Hunt:

What is the most difficult leadership lesson that you've learned over your career?

Rob Seamans:

This is supposed to be a lightning round? Okay. The answer is that you never quite know, at least in my job, when it is that you'll be thrust into a leadership position. Part of the reason I got into the job of being a professor is because I didn't want to be a manager at a firm.

I wanted to have the 10,000-foot view of things, but it turns out that as you advance in your career as a professor, suddenly you get thrust into these leadership positions without fully realizing it.

Jeff Hunt:

Who's one person you would interview if you could, living or not?

Rob Seamans:

Bill Gates.

Jeff Hunt:

What's your top book recommendation?

Rob Seamans:

You know, I mentioned Chaucer's Canterbury Tales earlier. I don't know if that would be my top recommendation, but I think that's one to consider, one that folks should have on their list.

Jeff Hunt:

What's the best piece of advice you've ever received?

Rob Seamans:

I've received lots and lots of advice. The best piece of advice I've ever received was from an elementary school teacher who came up to me after I totally bombed at a school assembly. I was on the student council, and I was trying to ad-lib an announcement to the class. I totally bombed. And he came up to me and said: when you're going to do public presentations, you have to prep ahead of time.

Seems straightforward. I forget how old I was, probably 10 or 11 at the time, and I'd never thought of that. That was probably the single best piece of advice I've ever gotten.

Jeff Hunt:

You've brought so much wisdom to the podcast today. And if you had to synthesize one or two key takeaways for our listeners, what would you want to leave people with?

Rob Seamans:

We touched a little bit on substitution versus augmentation, so I want to come back to that. There are lots of articles in the news that really try to highlight this question: is the technology going to substitute for us, or is it going to augment what we do?

And a lot of that actually tends to be tilted more towards substitution; it seems like scare stories sell. There certainly will be a little bit of that, but I really think there's going to be a lot more augmentation than substitution of human work.

And again, I just think back to the examples of the personal computer or the internet. These are tools that we use in our jobs, and AI is a tool like these. We will be using it. Our jobs are going to change, but most jobs will still be around; it's just that the nature of the work in those jobs will have changed.

Jeff Hunt:

Rob, I loved our conversation today. Thanks so much for coming on the podcast.

Rob Seamans:

Thank you, Jeff. I really appreciate it.


Outro (39:38)

Closing music jingle/sound effects

Jeff Hunt:

Thanks for listening to Human Capital. If you like this show, please tell your friends and also take the time to go rate and review us. Human Capital is a production of Goalspan, your integrated source for performance management. Now go out and be the inspiration to other humans. And thank you for being humankind.
