Knowingly or unknowingly, you've just taught the world's first prompt engineering course. Congrats!
What a pleasurable read. It's a marvelous thing to see AI being embraced by an educator. By doing so, you have equipped your students for what's to come in the next 10-20 years. Bravo! 👏
I think where we disagree is how rapidly the CX of LLMs will change. While learning the current interface is fun and useful for getting value out of the current rev, these deficits are the first that will be addressed. So there is huge value in learning "Pong" when the Xbox is around the corner, but it's more about how the tech evolves and changes and less about retaining "Pong-playing" skills.
Legal issues - from an MBA perspective, this cuts to the core of "how do I make money". Unless this is simply "fun with tech", IP is rather important in today's tech landscape. As this is very unsettled law that will likely strike at the core of some AI business models, it is worth covering. Also see the EU's recent ruling on Facebook's business model and the Garland DOJ's Google case for how legal issues may destroy some of the most valuable companies on the planet.
Sustainability - from an MBA perspective, I thought the current trend was that sustainability comes up in almost every discussion. The ROI of these types of solutions factors very much into "how valuable is this tool". I would hope that most MBA students understand that Amazon/Microsoft/Google data centers aren't free. Green AI is already a thing.
Ethics - from an MBA and tech-strategy perspective, ethics are at the core of how we use new disruptive technologies. The US has been less than stellar in its use of new tech (see privacy, surveillance capitalism), and omitting a robust discussion of ethics seems like a lost opportunity to guide young minds.
While I love this, you are living through V1 tech that will evolve super quickly and likely specialize - so students should look to understand the concepts of why it works this way, as opposed to spending a lot of time on the NLP interface.
Also, please tell me that you're also including a big slug of:
"ethical use of AI" - what if we ask it to help us do "bad things"?
"sustainability" - it creates greenhouse gases to produce a lot of worthless results
"legal/IP issues" - Stable Diffusion could be sued into the ground by Getty, and if training data is IP, then scraping my website (copyright) could be illegal (ahem, ChatGPT...).
re: "will evolve"
There is a decent chance that the advice being given will still be relevant. If the AI were a true general intelligence at the level of a human, you would still need to guide it as to the specifics of what you want from it, just as you might coach a human assistant. Yes, these systems will eventually become more personalized and have a user model that lets them guess more about the particular focus of the person using them, e.g. that they are a business person writing a serious document and not someone writing a fluff piece for a lifestyle blog. However, that may take longer than people expect, and since people are multi-faceted (perhaps they started a lifestyle blog on the side), they'll still need to know how to prompt the system to behave differently.
It seems like it also imparts a useful mindset that may transfer to other domains. It helps people grasp the concept of the "curse of knowledge": they can't assume the AI knows anything, and therefore need to learn to be explicit about the things they take for granted when guiding it.
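To make that "curse of knowledge" point concrete, here is a toy sketch (not any particular vendor's API - the function and field names are invented for illustration) contrasting a vague prompt with one that spells out the context the model cannot be assumed to know:

```python
# Illustrative only: the point is how much context a well-guided
# prompt makes explicit versus what a vague one leaves unstated.

VAGUE = "Summarize our product."

def specific_prompt(audience: str, tone: str, product: str, length_words: int) -> str:
    """Build a prompt that states everything the model can't be assumed to know."""
    return (
        f"You are writing for {audience}. Use a {tone} tone.\n"
        f"In about {length_words} words, summarize this product:\n"
        f"{product}"
    )

prompt = specific_prompt(
    audience="investors reading a serious business document",
    tone="formal",
    product="an AI-assisted note-taking app for sales teams",
    length_words=120,
)
```

The vague version forces the model to guess at audience, tone, and length; the specific version hands over exactly the assumptions a human assistant would otherwise have to be coached on.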
re: ""ethical use of AI"
Seriously? Should he also waste time on "what if people use paper and a printing press to spread bad ideas?" What if people use computers to do unethical things? What if they use an airplane to do a bad thing, like fly it into a skyscraper? It seems like that is more a topic for some class focused on ethics in general. It's unclear why it needs to be dragged into other classes.
re: "legal/IP issues"
That's the only serious concern, though most of the liability likely falls on the AI companies themselves, and it is also essentially the same as the issue of work product from, say, an outside consulting firm. Essentially, using the results of AI is like consulting a human who somehow read or saw the works of myriad other humans and hands you some result based on that. How close that result is to existing work might potentially be a concern.
re: "sustainability". Again, seriously? For one thing, if the topic were to be broached in an MBA curriculum, it would seem appropriate for a course focused on that which examines tradeoffs. In this case the tradeoffs seem questionable to waste time on in a course that isn't focused on the topic.
How much time do people the age of typical MBA students spend on video games that don't lead to productive "results", or on other endeavors like vacations? If they are more efficient due to the use of AI, they may spend less time using a computer. If they become more productive at helping businesses operate more efficiently, and those businesses waste fewer resources and less energy producing products, they are more sustainable. If they do their jobs better, they are likely to realize it's cost-effective to make businesses more sustainable.
Yup, AI is computing-resource intensive, but that needs to be kept in perspective against its ability to indirectly reduce the use of resources elsewhere.
Re: "...It seems like that is more a topic for some class focused on ethics in general. It's unclear why it needs to be dragged into other classes." If "Ethical Behavior in All Possible Domains" were a required class in every program, your concern might be valid. But putting a dangerous weapon in the hands of a student otherwise unaware of the consequences, social and personal, of its use is negligence on the part of the teacher. IMHO.
The term “weapon” was probably too dramatic. It assumes intentionality. I’m thinking more in terms of causing harm out of ignorance of the potential risks involved. And you are right in saying that problematic information can be gathered from many places. The properly trained user will know when and how it is arriving and what to do (or not do) with it.
There are specific lessons on discriminating between the right and wrong (ethical and unethical) uses of weapons, medical practices, money (particularly other people's money), technology, legal advice, etc. Medical malpractice is a class in itself.
I agree that the broader values of protecting human life, "do unto others...", don't belong in an MBA class. Ideally, the admissions process filters out those who lack them. The question is how these values apply when faced with a new set of tools. How can an otherwise well-meaning person unwittingly find themselves going down an ethically questionable path? It would be useful to be able to think through, in concrete terms rather than abstract concepts, the possible downstream effects or unintended consequences of one's use of these tools in a public or business environment.
Thanks for your thoughts, -jgp
These are people who are using AI, not people creating it.
They can get problematic information or misinformation from the internet or a classmate right now, or from a colleague in their future job. They can misuse information from any source. It's unclear how this is somehow a "dangerous weapon" that differs in kind in a way that needs to be specially treated. What is the difference that makes a difference from other information? If there needs to be something taught regarding the "ethics of information which may be problematic or flawed", then that seems to be a general need.
People fear what they don't understand, and that seems to be what's going on with this. Either that, or it's part of the desire to find excuses to push "social justice" into every class and meeting under the sun, and in this case to embed it within the term "ethics". Unfortunately, those who push such things don't tend to grasp that there are differing conceptions of "ethics" and "social justice", and that not everyone is obsessed with the particular issues they are or sees a need to embed them everywhere.
These are people who are using it, will be exposed to it in their business career, and may create startups/tools/apps using it.
You seem to be arguing that we shouldn't be teaching "social responsibility" to MBA students.
Is that your point?
My point is that courses have a particular focus. Should a class on typing teach about the ethical uses of words? There is more than enough for a whole course on AI and society by itself. It's necessary to decide what's appropriate for inclusion in each class that isn't specifically on a topic but is merely using something as a tool. It seems like these days some people with a narrowly focused interest assume everyone needs to hear about it everywhere, all the time.
LOL, my point was that they were worth mentioning, as they are dynamics of most disruptive tech that deserve strategic consideration. I never said to make them a focus of the class, but they're worth some time from an innovation, valuation, and GTM perspective. But I guess you're taking a more "vocational" approach to the tech.
I really appreciate you sharing these detailed practical tips directly from your classroom. I believe the secret to improving education worldwide is to get good, solid lecturers collaborating and sharing their practices and ideas. It's clear how much you care about education, how creative you are, and how passionate you are about embracing change and making sure the curriculum remains relevant.
Your writing style is also open, honest, transparent and without any unnecessary 'zhooshing'. I really appreciate this! I've read a number of your posts now and will be using and referencing some of your examples when I speak at the AI in Higher Education Conference in South Africa later this year.
Definitely one of the leading educators when it comes to integrating AI, keeping the curriculum relevant and really understanding what learning is about! Very much appreciate you. Thank you.
Hi. I am doing an audit of policies on AI use by top universities. I noticed a striking similarity with your note and that of UChicago Prof. Gregory Bunch. See this: https://instructionaldesign.chicagobooth.edu/2023/03/20/artificial-intelligence-ai-tools-chat-gpt/
I'd appreciate it if you'd reach out to me at my school email, josotto@ceu.edu.ph. I am a research faculty member from the Philippines.
Hey! aiarchives.org can assist with point no. 3 of your A.I. Policy.
A.I. Archives saves the prompt along with the ChatGPT response in a shareable URL, allowing the information to be retrieved by other readers, and it creates citations.
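As a rough illustration of what citing an archived chat might look like (the URL pattern and citation format below are entirely hypothetical - see aiarchives.org for the real ones):

```python
# Hypothetical sketch of turning an archived AI conversation into a
# citable reference. The URL pattern and citation wording are invented
# for illustration and are NOT aiarchives.org's actual format.

def cite_archived_chat(model: str, date: str, archive_id: str) -> str:
    """Format a simple citation for an archived AI conversation."""
    url = f"https://aiarchives.org/id/{archive_id}"  # illustrative URL pattern
    return f"{model}, response to author prompt, {date}, archived at {url}"

citation = cite_archived_chat("ChatGPT (GPT-4)", "2023-05-01", "abc123")
```

The useful idea is simply that a stable URL plus model and date makes an AI response as checkable as any other cited source.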
What I can say is that these AIs have helped traders develop trading strategies by doing some of the research.
An interesting and informative case study. Thanks for sharing your experience of teaching students some AI literacy - something that is sorely needed for everyone. I also can't help thinking that MBA students at a decently ranked university are not exactly representative of the majority of students, especially undergraduates at run-of-the-mill institutions or students in schools. The generation heading off to prep and year one of primary school this year will go through their entire formal education and graduate into a world awash with this technology. They need more guidance, and we will probably learn from them too. You probably know of this already, but for the benefit of others, a free resource for learning prompting is https://learnprompting.org/
Thank you for these suggestions on how to embrace the use of AI in MBA teaching. I am wondering, though: how do you grade these assignments? I am guessing they all get an A in the end? Would appreciate your advice on grading.
It was interesting! Thank you.
Thank you for sharing your teaching AI journey. Looking forward to hearing about what happens next.
Very interesting work. Well done! For info, our team of professors (University of Mons, Belgium) is using the same AI to design an architecture project with students from a workshop on digital architecture. The AI is integrated into the design process before the work is materialized on a real site. Overall, we have informed our students of the same risks. However, we use ChatGPT 3.5 to write texts for us in the semantics recognised specifically by... Midjourney. In this way we are using the best of both worlds, passing our chat-to-drawing prompts from one tool to the other. We will keep you posted on the outcome at the end of June. Best.
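For readers curious what that chat-to-drawing hand-off can look like, here is a minimal sketch (the function and parameter names are our own illustration, not Midjourney's interface - Midjourney simply takes a plain prompt string, optionally ending in flags such as `--ar` for aspect ratio):

```python
# Illustrative sketch of packaging LLM-written text into a Midjourney-style
# prompt string. Only the "--ar" aspect-ratio flag mirrors real Midjourney
# syntax; everything else here is made up for the example.

def chat_to_drawing_prompt(description: str, style: str, aspect_ratio: str = "16:9") -> str:
    """Combine model-written text with rendering hints into one prompt string."""
    return f"{description}, {style} --ar {aspect_ratio}"

prompt = chat_to_drawing_prompt(
    description="a timber pavilion on a sloping field site",
    style="architectural concept render, soft morning light",
)
# prompt == "a timber pavilion on a sloping field site, architectural concept render, soft morning light --ar 16:9"
```

The design point is simply that the language model handles the descriptive wording while the image tool's flags stay fixed, so the two tools can be chained without manual rewriting in between.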
I really like the idea of embracing the change. Instead of forbidding the tools, let's learn how to use them - also a great opportunity to practice critical thinking, fact-checking, etc.
There are 2 BIG takeaways I get from this. First, this was an effective "how-to" guide for using AI tools well, one that I plan to use extensively in my own personal marketing efforts. The second, from a parent's perspective, is the need to embrace AI as a ubiquitous part of the world in which our children are developing. This is important because resisting it is tantamount to creating a barrier between parent and child, whereas learning to use it with them creates another avenue for sharing life together.
Hello!
It is my understanding that you are teaching MBA students. Those students should already know a lot of high-level concepts such as source validation, the importance of nuance, etc.
Your article underlines this pretty clearly: they were able to avoid pitfalls, correct the AI, and be critical of the results thanks to their prior knowledge.
Now, that matches my view of AI, and of most tools actually: they mostly help people who already know how to do things without them, because those people understand the underlying mechanics of writing a good essay or doing subtraction.
I'm afraid that if you give AI tools to young people too soon, they will not understand all that, and the effect could be devastating.
What are your thoughts on this?