This year, AI is everywhere. Making artwork. Replying like a human in real-time chat. Creating long-form text documents based on specific prompts. And it’s not just consumer use. AI has been part of the toolkit of professionals, especially in legal and accounting, for many years. But now generative AI has stepped onto the scene—think ChatGPT and the AI built into Bing’s search engine.
Many of the things generative AI is doing right now sound a lot like what you do as a professional: getting a specific set of facts and figures, referencing underlying issues and regulations, and drafting documents. But many questions about AI remain: Is it trustworthy? Is it ethical to use ChatGPT? What data sets are these chatbots trained on?
The companion articles covered basic artificial intelligence and machine learning terms – what machine learning is, how data is structured, how we’ve all been using AI for years. Now, let’s get into the transformative details of recent developments in generative AI.
What is generative AI?
Generative AI is a subfield of artificial intelligence that focuses on creating new content, such as natural language, audio, music, or images. Generative AI models use a variety of AI techniques, particularly neural networks, probabilistic modeling, and deep learning algorithms, to create an output. Generative AI can be used to automate mundane tasks, spark creativity, and create custom versions of products based on customer needs.
What you need to know about generative AI
Generative AI has the potential to revolutionize how businesses operate by increasing efficiency, speeding up creation of complex content, and providing customers with more personalized experiences. ChatGPT is the most well-known consumer facing form of generative AI, and forward-thinking businesses are establishing best practices on how their staff should use it.
What is a chatbot?
A chatbot is a computer program that uses natural language processing to simulate human conversation and answer queries. You’ve most likely encountered one while shopping with an online retailer: a sidebar pops up to let you know you can ask common questions. If a question is too complicated for the chatbot, the program connects the customer with a live representative.
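The escalation logic a retail chatbot follows can be sketched as a simple keyword matcher with a human fallback. This is a minimal illustration, not any vendor’s implementation, and the FAQ entries are made-up examples:

```python
# Minimal sketch of a rule-based retail chatbot: match the customer's
# question against a small FAQ, and hand off to a live representative
# when nothing matches. FAQ entries here are hypothetical.

FAQ = {
    "return": "You can return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our stores are open 9am-9pm, Monday through Saturday.",
}

def answer(query: str) -> str:
    """Return a canned answer if a known keyword appears; otherwise escalate."""
    text = query.lower()
    for keyword, response in FAQ.items():
        if keyword in text:
            return response
    return "Let me connect you with a live representative."
```

A question like “What is your shipping time?” matches the `shipping` entry, while anything outside the FAQ triggers the handoff. LLM-based chatbots replace the keyword table with a learned model, but the escalate-when-unsure pattern survives in well-designed deployments.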
Lately, chatbots based on large language models, like GPT, have been appearing as standalone tools (ChatGPT), attached to a search engine (Bing), or alongside one (Google’s Bard). The goal of these chatbots is to answer any kind of question, not just questions related to a specific shopping experience.
What is a GPT?
GPT stands for Generative Pre-Trained Transformer. It refers to a specific model architecture developed by OpenAI.
- Generative means this AI algorithm generates new outputs based on the data it used for training.
- Pre-Trained means just that: the model has already been trained on a large body of data before you ever use it.
- Transformer means it’s a particular kind of neural network architecture, introduced by Google researchers in 2017.
GPT is a specific instance of a large language model, and LLM is a more general term that encompasses various large language models developed by different researchers and organizations, like BERT (Bidirectional Encoder Representations from Transformers) by Google.
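The “Generative” part—producing new output shaped by the statistics of the training data—can be illustrated with a toy model far simpler than a transformer: a bigram text generator. The training sentence below is an invented example; a real GPT learns vastly richer patterns, but the generate-the-next-word loop is the same basic idea:

```python
import random
from collections import defaultdict

# Toy illustration of "generative": learn which word tends to follow
# which in the training text, then sample new sequences from those
# statistics. The training sentence is a made-up example.

def train(text: str) -> dict:
    """Build a table mapping each word to the words observed after it."""
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table: dict, start: str, length: int = 8) -> str:
    """Sample a new word sequence by repeatedly picking a plausible next word."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model reads the prompt and the model writes the answer"
table = train(corpus)
print(generate(table, "the"))
```

Each run can produce a different sentence that never appeared verbatim in the training text. Scale this idea up to billions of learned parameters and tokens instead of whole words, and you have the intuition behind a GPT.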
What you need to know about GPTs
The most famous Generative Pre-Trained Transformer is ChatGPT; you may have asked it to write an email – or, more fancifully, a Star Wars movie by Charles Dickens. But although text-generating GPTs have been some of the most well-known, generative models can be trained on other large datasets and can output in other formats. For example, other generative AI consumer tools produce “painted” artwork (e.g., DALL-E 2), audio that sounds like a celebrity, and even full-motion video based on user input.
The potential uses for this technology for professionals are immense, but it’s a rapidly changing landscape, and most of what’s public is very much consumer-focused.
What is a prompt?
LLMs and GPTs start from user input, and that specific input is called a prompt. A user types in a natural-language sentence or paragraph, such as “What is the law governing marketing activities in California?” or “What was the deductibility limit for charitable contributions in 2010?” and the AI system will output its response.
What you need to know about prompts
Prompts are how you, or anyone, interact with a generative model. More specific and detailed prompts typically yield output more in line with user expectations. Any professional already familiar with the importance of putting things in exactly the right language will have an advantage in creating and evaluating prompts.
Some commentators have claimed that many professions will reduce to being “prompt engineers.” But human insight, context, and experience will always be needed – to craft the right query to get the right answer, to evaluate the response for hallucinations or biases, and to interpret and act based on that output.
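The value of specificity can be made concrete with a small prompt-building helper. This is a hypothetical sketch: the function, its field names, and the wording are illustrative, not any vendor’s API:

```python
def build_prompt(question: str, jurisdiction: str, year: int, fmt: str) -> str:
    """Assemble a detailed prompt from a bare question plus constraints.

    More constraints (jurisdiction, time period, desired format) usually
    mean output closer to what the user actually wants. Hypothetical example.
    """
    return (
        f"{question} "
        f"Limit the answer to {jurisdiction} rules in effect in {year}. "
        f"Respond as {fmt}."
    )

vague = "What was the deductibility limit for charitable contributions?"
specific = build_prompt(
    vague,
    jurisdiction="United States federal",
    year=2010,
    fmt="a short bulleted summary citing the relevant code sections",
)
print(specific)
```

The vague version leaves the model to guess the jurisdiction, the tax year, and the format; the specific version pins all three down, which is exactly the skill professionals already exercise when drafting precise questions.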
Is AI trustworthy?
Artificial intelligence isn’t 100% trustworthy, because it can make mistakes—just like humans, who aren’t 100% trustworthy either. Transparency in AI data sets and language models is key to developing a working level of trust. Company stakeholders — such as developers, users, and practitioners — should have confidence in the quality of the data that AI models use and how they make decisions. They should also be ready to test AI outputs against their own expertise and knowledge, particularly in early implementations.
AI making headlines in 2023
OpenAI, the company behind ChatGPT, released GPT-4 on March 14, 2023. ChatGPT users can access the newest version with a paid subscription; free users still use GPT-3.5. However, Microsoft’s Bing Chat (which is now available to everyone) uses GPT-4 in its chatbot. Some promised upgrades with GPT-4 include:
- Accepting images as input alongside text
- Longer text processing (up to 32,000 tokens, which is approximately 52 pages)
- More factual responses (60% fewer hallucinations)
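The “32,000 tokens is approximately 52 pages” claim can be sanity-checked with common rules of thumb: roughly three-quarters of an English word per token, and a few hundred words per single-spaced page. Both ratios below are approximations, not OpenAI specifications:

```python
# Back-of-the-envelope check of "32,000 tokens is about 52 pages".
# Both conversion ratios are rough rules of thumb, not exact figures.
tokens = 32_000
words_per_token = 0.75   # ~3/4 of an English word per token, on average
words_per_page = 460     # a typical single-spaced page

words = tokens * words_per_token   # about 24,000 words
pages = words / words_per_page     # about 52 pages
print(f"{words:.0f} words, about {pages:.0f} pages")
```

For comparison, GPT-3.5’s standard 4,096-token window works out to only a handful of pages under the same assumptions, which is why the longer window matters for document-heavy professional work.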
Not everybody is excited about the release of new and improved GPTs, though. Several tech industry leaders signed an open letter in late March 2023 calling for a six-month pause on all giant AI experiments more powerful than GPT-4. The letter maintains that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.” Steve Wozniak (co-founder of Apple), Elon Musk, Eric Schmidt (former CEO of Google), and Tom Gruber (who helped design Siri) are just a few of the high-profile names that signed the letter from the Future of Life Institute.
What should I expect from AI in the future?
AI technology is entering a new era, with a massive number of new consumer options arising from the creation of the GPT model several years ago. A year from now, the AI landscape will likely look very different than it does today, and different again a year after that.
What won’t change is the need for human expertise and insight, both on the AI-creation side and the AI-user side. AI solutions for professionals will rapidly change, and something you would consider impossible today might be part of your daily work by 2025.
That means, now more than ever, you’ll want to rely on trusted expertise rather than falling down the rabbit hole of “good-enough” consumer technology. Look at how AI models were trained, what technology they’re using, how common “hallucinations” are in their output, and, above all, their track record. Look for vendors that have been implementing AI solutions for decades and have established globally renowned AI research centers as the partners you’ll want to join in your continuing AI journey. Visit our hub on artificial intelligence for more insights about AI for professionals.