The essential small business guide to generative AI


Generative artificial intelligence (AI) isn’t new, but the recent explosion of AI chatbots, AI image-generation tools and AI-driven applications leaves many small business owners in unfamiliar territory. What is generative AI? And how could, or should, entrepreneurs use it in a small business?

While people are still working out what AI is actually capable of versus what merely grabs headlines, there are already some simple ways in which generative AI can help small business owners. This guide aims to explain what generative AI is and how it can save small businesses time and money.

What is generative AI?

Generative AI refers to a type of artificial intelligence that is capable of creating new content autonomously by learning from existing data. This can include generating text, images, music, or even design concepts by leveraging advanced machine-learning algorithms, such as deep learning and neural networks.

OpenAI’s ChatGPT and DALL-E are dominating the news, but new chatbot and AI art generation tools are being rolled out on a near-daily basis. From Google’s Bard to Meta’s BlenderBot, large tech companies are rolling out increasingly sophisticated generative AI tools.

It’s easy to start feeling overwhelmed, but it’s important to develop a high-level understanding of generative AI and what it can do (along with what it can’t).

Leveraging AI technology is crucial for small businesses to stay competitive in today’s rapidly evolving global market. AI can help small business owners improve their efficiency and productivity by writing copy for websites or blogs, automating tasks, streamlining decision-making using data, and improving the overall customer experience.

Interested in learning more about how to use generative AI for small businesses? Let’s take a closer look at some examples that you can apply to your business.

Related: AI prompts for small business owners

A quick word of caution

This guide is meant as a general overview of how generative AI can help small business owners save time, and often money, when running their businesses. We strongly recommend closely reviewing the output of any AI tool you intend to use, as AI can return incorrect, false or outdated information, or may produce content containing third parties’ intellectual property.

Additionally, be aware that generative AI tools may save information that a user enters, so avoid entering any commercially sensitive or proprietary information in your prompts (the questions or tasks you ask an AI tool to help with).

5 ways to use generative AI for small business

From getting started with business ideas to optimizing your website content, here are 5 examples of how generative AI can help.

1. Generate creative and unique business names

The biggest barrier to getting started is sometimes a blank screen. Generative AI is great for getting your creative juices flowing. So if you’re stuck with writer’s block, or if thinking of a catchy business name isn’t your strong suit, consider using AI to kick-start the process.

AI can generate a large number of potential business names in a short amount of time, giving entrepreneurs a list of unique and creative names that they might not have come up with otherwise.

Screenshot of ChatGPT with the prompt asking for 20 interesting food company name ideas.

And, if you’re not completely happy with the recommendations, you can iteratively improve the generated names by refining your prompt and telling the AI chatbot what you’d prefer to see, ensuring the final name you select is well-suited to your business.
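If you’d rather script this kind of brainstorming than work in a chat window, the same back-and-forth can be done through an API. Below is a minimal sketch using OpenAI’s Python library (v1.x); the model name and prompt wording are illustrative assumptions, not a recommendation of a specific tool.

```python
# Minimal sketch: asking a chat model for business name ideas, then
# refining them. Assumes the `openai` package (v1.x) and an
# OPENAI_API_KEY environment variable; model choice is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

first = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Give me 20 interesting food company name ideas."}],
)
ideas = first.choices[0].message.content
print(ideas)

# Iterate, as described above: tell the model what you'd prefer to see.
refined = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Give me 20 interesting food company name ideas."},
        {"role": "assistant", "content": ideas},
        {"role": "user",
         "content": "These feel too formal. Give me 10 more playful options."},
    ],
)
print(refined.choices[0].message.content)
```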


2. Automate content creation

Artificial intelligence has the potential to revolutionize the way small business owners generate content for their business. By simplifying the content creation process and enhancing the effectiveness of published materials, such as website content, newsletters or blogs, AI can save entrepreneurs both time and money.

Using advanced natural language processing algorithms and deep learning techniques, AI-powered content-generation tools are able to analyze existing content within a specific industry or niche. Using that information, AI tools can then generate relevant and engaging content. In addition, you can tailor the output to match the overall vibe of your business.

Does your small business tend to be more light-hearted and fun? Add adjectives to help the AI tool generate content that fits your brand voice.

Additionally, AI can automate the process of content scheduling and distribution across various channels, allowing small business owners to reach their audience with consistent and timely communication.

ChatGPT responses to the prompt requesting an editorial calendar for the month of May.

By streamlining the content creation process and reducing the time and effort required, AI enables small business owners to focus on other critical aspects of their business, ultimately helping to drive growth and success in the competitive marketplace.


3. Enhance customer service

Another area where AI can be a powerful tool for small business owners is customer communication. With AI, business owners can quickly craft personalized responses, such as thank-you emails to customers after they make a purchase or sign up for a service, creating a sense of appreciation and helping to foster customer loyalty.

ChatGPT response to a prompt requesting a thank-you note for a customer.

AI can also streamline the writing process for creating follow-up messages, such as reminders for upcoming appointments or subscription renewals, making it easier to maintain a strong connection with clients.

ChatGPT response to an AI prompt asking for an appointment reminder message.

Customers aren’t always happy with the service they’ve received. Another task that AI can help with is responding to customer inquiries and complaints by analyzing the content of messages and generating customizable responses that address the specific issue at hand.

Customer service interactions can quickly eat into a small business owner’s day. However, AI-powered chatbots and virtual assistants can handle multiple customer interactions simultaneously for you, significantly reducing the response time and allowing customer service representatives to focus on more complex tasks. This results in a more cost-effective customer service operation.

By providing fast and personalized responses to customers, AI-powered tools can enhance the overall customer experience, leading to higher satisfaction rates and a stronger brand reputation, all while freeing up time for business owners.


4. Support for social media management

Social media is a necessary part of owning a small business, but managing multiple platforms and attempting to brainstorm creative new content can feel daunting. AI can help out here as well.

Here are a handful of tasks that AI is able to help with:

  • Brainstorm creative captions for image-based posts
  • Create editorial calendars based on local holidays or celebrations
  • Generate conversation starters for a social media audience
  • Write simple video scripts
  • List creative content ideas
  • Craft ad copy to grab people’s attention

While this list isn’t exhaustive, AI-based tools are a great way to get the creative process rolling, especially on days when your creativity feels like it’s in a rut.

AI is also helpful for identifying key moments and relevant events for a target audience, such as local celebrations like festivals and parades, as well as industry-specific holidays. This gives businesses an opportunity to create content that connects with their audience on a more personal level.

Another AI advantage is its ability to help small business owners maintain a consistent posting schedule, ensuring the business stays regularly active and visible on social media. That consistency keeps the audience engaged and establishes the business as reliable and authoritative.

ChatGPT response to a prompt asking for an editorial calendar for Facebook in May.

The benefits that AI brings to the table for a small business owner ultimately contribute to increased brand visibility and a stronger connection with the target audience, making AI an invaluable tool for businesses looking to thrive in today’s digital age.


5. Optimize content for SEO

For small businesses looking to drive traffic to their business, SEO is still king. And while SEO is definitely a skill, AI is able to give SEO novices the boost they need to get found online.

Keyword research is important for SEO, and AI is able to help business owners identify relevant and high-performing keywords in their industry.

In addition to helping business owners brainstorm keywords, AI can help create copy for a website. AI is able to generate SEO-friendly content that incorporates target keywords seamlessly while still providing value to the reader. This helps improve the website’s search engine ranking and increases the likelihood of attracting and retaining visitors.

AI is also able to generate optimized meta tags and descriptions for web pages, ensuring that they accurately reflect the content and include relevant keywords. These elements play a significant role in how search engines index and rank web pages, which influences their visibility in search results.

Another factor for SEO is website update frequency, and AI is able to assist here by suggesting content for blog posts or articles on a business website.

And, if you need help coming up with a first draft for your blog post, AI can help with generating keyword-rich text, ensuring that the content is optimized for search engine ranking, which increases the visibility of a small business online. You can then just make tweaks so it’s authentic to your business.


Summing it all up

Hopefully, we’ve given you some helpful insights into how this new and evolving technology can help small business owners save time and money. Just remember that it is crucial to review the output of AI tools and to avoid entering sensitive information. With those precautions in place, leveraging generative AI to streamline tasks and improve the customer experience can give small businesses a competitive edge.




How to Train Generative AI Using Your Company’s Data


Many companies are experimenting with ChatGPT and other large language or image models. They have generally found them to be astounding in terms of their ability to express complex ideas in articulate language. However, most users realize that these systems are primarily trained on internet-based information and can’t respond to prompts or questions regarding proprietary content or knowledge.

Leveraging a company’s proprietary knowledge is critical to its ability to compete and innovate, especially in today’s volatile environment. Organizational innovation is fueled through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how. However, knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings. As such, a company’s comprehensive knowledge is often unaccounted for and difficult to organize and deploy where needed in an effective or efficient way.

Emerging technologies in the form of large language and image generative AI models offer new opportunities for knowledge management, thereby enhancing company performance, learning, and innovation capabilities. For example, in a study conducted at a Fortune 500 provider of business process software, a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers. The system also expedited the learning and skill development of novice agents.

Like that company, a growing number of organizations are attempting to leverage the language processing skills and general reasoning abilities of large language models (LLMs) to capture and provide broad internal (or customer) access to their own intellectual capital. They are using these models for such purposes as informing their customer-facing employees on company policy and product/service recommendations, solving customer service problems, or capturing employees’ knowledge before they depart the organization.

These objectives were also present during the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task. Today, however, generative AI is rekindling the possibility of capturing and disseminating important knowledge throughout an organization and beyond its walls. As one manager using generative AI for this purpose put it, “I feel like a jetpack just came into my life.” Despite current advances, some of the same factors that made knowledge management difficult in the past are still present.

The Technology for Generative AI-Based Knowledge Management

The technology to incorporate an organization’s specific domain knowledge into an LLM is evolving rapidly. At the moment there are three primary approaches to incorporating proprietary content into a generative model.

Training an LLM from Scratch

One approach is to create and train one’s own domain-specific model from scratch. That’s not a common approach, since it requires a massive amount of high-quality data to train a large language model, and most companies simply don’t have it. It also requires access to considerable computing power and well-trained data science talent.

One company that has employed this approach is Bloomberg, which recently announced that it had created BloombergGPT for finance-specific content and a natural-language interface with its data terminal. Bloomberg has over 40 years’ worth of financial data, news, and documents, which it combined with a large volume of text from financial filings and internet data. In total, Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time. Few companies have those resources available.

Fine-Tuning an Existing LLM

A second approach is to “fine-tune” an existing LLM, adding specific domain content to a system that is already trained on general knowledge and language-based interaction. This approach involves adjusting some parameters of a base model, and typically requires substantially less data (usually only hundreds or thousands of documents, rather than millions or billions) and less computing time than creating a new model from scratch.
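For a rough sense of what fine-tune training looks like in practice, here is a hedged sketch using OpenAI’s Python SDK (v1.x). The file name, example format, and model choice are assumptions for illustration; other vendors expose similar but different interfaces, and, as noted below, fine-tuning is not offered on every model.

```python
# Sketch of fine-tuning a hosted base model on curated domain examples.
# Assumes the `openai` package (v1.x). The training data is a JSONL
# file of chat-formatted examples, one per line, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()

# Upload the curated examples (typically hundreds to thousands).
training_file = client.files.create(
    file=open("domain_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a base model that permits tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # illustrative; some newer models disallow tuning
)
print(job.id, job.status)  # poll the job until it reports success
```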

Google, for example, used fine-tune training on its Med-PaLM2 (second version) model for medical knowledge. The research project started with Google’s general PaLM2 LLM and retrained it on carefully curated medical knowledge from a variety of public medical datasets. The model was able to answer 85% of U.S. medical licensing exam questions — almost 20% better than the first version of the system. Despite this rapid progress, when tested on such criteria as scientific factuality, precision, medical consensus, reasoning, bias and harm, and evaluated by human experts from multiple countries, the development team felt that the system still needed substantial improvement before being adopted for clinical practice.

The fine-tuning approach has some constraints, however. Although requiring much less computing power and time than training an LLM, it can still be expensive to train, which was not a problem for Google but would be for many other companies. It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors. Some data scientists argue that it is best suited not to adding new content, but rather to adding new content formats and styles (such as chat or writing like William Shakespeare). Additionally, some LLM vendors (for example, OpenAI) do not allow fine-tuning on their latest LLMs, such as GPT-4.

Prompt-Tuning an Existing LLM

Perhaps the most common approach to customizing the content of an LLM for non-cloud vendor companies is to tune it through prompts. With this approach, the original model is kept frozen, and is modified through prompts in the context window that contain domain-specific knowledge. After prompt tuning, the model can answer questions related to that knowledge. This approach is the most computationally efficient of the three, and it does not require a vast amount of data to be trained on a new content domain.

Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge. The goal was to provide the company’s financial advisors with accurate and easily accessible knowledge on key issues they encounter in their roles advising clients. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.

While this is perhaps the easiest of the three approaches for an organization to adopt, it is not without technical challenges. When using unstructured data like text as input to an LLM, the data is likely to be too large with too many important attributes to enter it directly in the context window for the LLM. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada). The vector embeddings are a more compact representation of this data which preserves contextual relationships in the text. When a user enters a prompt into the system, a similarity algorithm determines which vectors should be submitted to the GPT-4 model. Although several vendors are offering tools to make this process of prompt tuning easier, it is still complex enough that most companies adopting the approach would need to have substantial data science talent.
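To make those mechanics concrete, here is a minimal sketch of the embed-and-retrieve pipeline in Python: embed the curated documents once, embed each incoming question, select the most similar chunks by cosine similarity, and place them in the model’s context window. The model names, toy document list, and brute-force numpy search are illustrative assumptions; a production system of the kind described here would use a vector database and more careful chunking.

```python
# Sketch of prompt tuning with vector embeddings and similarity search.
# Assumes the `openai` (v1.x) and `numpy` packages; model names and the
# brute-force search are simplifications for illustration.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Policy note: how advisors should discuss municipal bonds...",
    "Research summary: outlook for small-cap equities...",
    # ...thousands more curated chunks in a real system
]

def embed(texts):
    resp = client.embeddings.create(
        model="text-embedding-ada-002",  # the "Ada" model mentioned above
        input=texts,
    )
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)  # computed once and stored

def answer(question, k=2):
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every document chunk.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(documents[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only these documents:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```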

However, this approach does not need to be very time-consuming or expensive if the needed content is already present. The investment research company Morningstar, for example, used prompt tuning and vector embeddings for its Mo research tool built on generative AI. It incorporates more than 10,000 pieces of Morningstar research. After only a month or so of work on its system, Morningstar opened Mo usage to their financial advisors and independent investor customers. It even attached Mo to a digital avatar that could speak out its answers. This technical approach is not expensive; in its first month in use, Mo answered 25,000 questions at an average cost of $.002 per question for a total cost of $3,000.

Content Curation and Governance

As with traditional knowledge management, in which documents were loaded into discussion databases like Microsoft SharePoint, content needs to be high-quality before customizing LLMs in any fashion. In some cases, as with the Google Med-PaLM2 system, there are widely available databases of medical knowledge that have already been curated. Otherwise, a company needs to rely on human curation to ensure that knowledge content is accurate, timely, and not duplicated. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these scores determine each document’s suitability for incorporation into the GPT-4 system. Most companies that do not already have well-curated content will find it challenging to curate it for just this purpose.

Morgan Stanley has also found that it is much easier to maintain high-quality knowledge if content authors are aware of how to create effective documents. They are required to take two courses, one on the document management tool and a second on how to write and tag these documents. This is a component of the company’s content governance approach: a systematic method for capturing and managing important digital content.

At Morningstar, content creators are being taught what type of content works well with the Mo system and what does not. They submit their content into a content management system and it goes directly into the vector database that supplies the OpenAI model.

Quality Assurance and Evaluation

An important aspect of managing generative AI content is ensuring quality. Generative AI is widely known to “hallucinate” on occasion, confidently stating facts that are incorrect or nonexistent. Errors of this type can be problematic for businesses but could be deadly in healthcare applications. The good news is that companies that have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than with out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.

Companies adopting these approaches to generative AI knowledge management should develop an evaluation strategy. For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks. The Google Med-PaLM2 system, eventually oriented to answering patient and physician medical questions, had a much more extensive evaluation strategy, reflecting the criticality of accuracy and safety in the medical domain.

Life or death isn’t an issue at Morgan Stanley, but producing highly accurate responses to financial and investing questions is important to the firm, its clients, and its regulators. The answers provided by the system were carefully evaluated by human reviewers before it was released to any users. Then it was piloted for several months by 300 financial advisors. As its primary approach to ongoing evaluation, Morgan Stanley has a set of 400 “golden questions” to which the correct answers are known. Every time any change is made to the system, employees test it with the golden questions to see if there has been any “regression,” or less accurate answers.
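A “golden questions” suite like this is straightforward to automate. The sketch below is a hypothetical harness, not Morgan Stanley’s actual tooling: it replays each known question through the tuned system and flags answers that drift from the approved one, using whatever similarity scorer (embedding-based, string match, or a human rubric) the team trusts.

```python
# Hypothetical "golden questions" regression harness.
# `ask_system` fronts the tuned LLM and `score_similarity` returns a
# value in [0, 1]; both, and the threshold, are assumptions to adapt.
GOLDEN_QUESTIONS = {
    "What is the fee schedule for managed accounts?":
        "Managed accounts are billed quarterly at ...",
    # ...on the order of 400 such pairs, per the approach described above
}

def regression_check(ask_system, score_similarity, threshold=0.9):
    failures = []
    for question, approved in GOLDEN_QUESTIONS.items():
        candidate = ask_system(question)
        if score_similarity(candidate, approved) < threshold:
            failures.append((question, candidate))
    return failures  # any entries here warrant human review of the change
```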

Legal and Governance Issues

Legal and governance issues associated with LLM deployments are complex and evolving, leading to risk factors involving intellectual property, data privacy and security, bias and ethics, and false/inaccurate output. Currently, the legal status of LLM outputs is still unclear. Since LLMs don’t produce exact replicas of any of the text used to train the model, many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws). In any case, it is a good idea for any company making extensive use of generative AI for managing knowledge (or most other purposes for that matter) to have legal representatives involved in the creation and governance process for tuned LLMs. At Morningstar, for example, the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
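Mechanically, a pre-prompt of this kind is typically just a system message prepended to every conversation before the user’s question reaches the model. A minimal sketch follows; the policy wording is invented for illustration and is not Morningstar’s actual pre-prompt.

```python
# Illustrative pre-prompt restricting what the assistant will answer.
# The policy text is invented for this example.
PRE_PROMPT = (
    "You are a research assistant. Answer only questions about the "
    "firm's published investment research. If asked for personalized "
    "financial, legal, or tax advice, politely decline and refer the "
    "user to a qualified professional."
)

def build_messages(user_question):
    """Prepend the pre-prompt to every conversation."""
    return [
        {"role": "system", "content": PRE_PROMPT},
        {"role": "user", "content": user_question},
    ]
```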

User prompts into publicly-available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

In order to address confidentiality and privacy concerns, some vendors are providing advanced and improved safety and security features for LLMs, including erasing user prompts, restricting certain topics, and preventing source code and proprietary data inputs into publicly accessible LLMs. Furthermore, vendors of enterprise software systems are incorporating a “Trust Layer” in their products and services. Salesforce, for example, incorporated its Einstein GPT feature into its AI Cloud suite to address the “AI Trust Gap” between companies that want to quickly deploy LLM capabilities and the aforementioned risks that these systems pose in business environments.

Shaping User Behavior

Ease of use, broad public availability, and useful answers that span various knowledge domains have led to rapid and somewhat unguided and organic adoption of generative AI-based knowledge management by employees. For example, a recent survey indicated that more than a third of surveyed employees used generative AI in their jobs, but 68% of respondents didn’t inform their supervisors that they were using the tool. To realize opportunities and manage potential risks of generative AI applications to knowledge management, companies need to develop a culture of transparency and accountability that would make generative AI-based knowledge management systems successful.

In addition to implementation of policies and guidelines, users need to understand how to safely and effectively incorporate generative AI capabilities into their tasks to enhance performance and productivity. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work. Generative AI-based knowledge management systems can automate information-intensive search processes (legal case research, for example) as well as high-volume and low-complexity cognitive tasks such as answering routine customer emails. This approach increases efficiency of employees, freeing them to put more effort into the complex decision-making and problem-solving aspects of their jobs.

Some specific behaviors that might be desirable to inculcate — either through training or policies — include:

  • Knowledge of what types of content are available through the system;
  • How to create effective prompts;
  • What types of prompts and dialogues are allowed, and which ones are not;
  • How to request additional knowledge content to be added to the system;
  • How to use the system’s responses in dealing with customers and partners;
  • How to create new content in a useful and effective manner.

Both Morgan Stanley and Morningstar trained content creators in particular on how best to create and tag content, and what types of content are well-suited to generative AI usage.

“Everything Is Moving Very Fast”

One of the executives we interviewed said, “I can tell you what things are like today. But everything is moving very fast in this area.” New LLMs and new approaches to tuning their content are announced daily, as are new products from vendors with specific content or task foci. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.

While there are many challenging issues involved in building and using generative AI systems trained on a company’s own knowledge content, we’re confident that the overall benefit to the company is worth the effort to address these challenges. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.


13 Principles for Using AI Responsibly


The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed may lead to neglecting ethical guidelines, bias detection, and safety measures. Known and emerging concerns associated with AI in the workplace include the spread of misinformation, copyright and intellectual property concerns, cybersecurity, data privacy, as well as navigating rapid and ambiguous regulations. To mitigate these risks, we propose thirteen principles for responsible AI at work.

Love it or loathe it, the rapid expansion of AI will not slow down anytime soon. But AI blunders can quickly damage a brand’s reputation — just ask Microsoft’s first chatbot, Tay. In the tech race, all leaders fear being left behind if they slow down while others don’t. It’s a high-stakes situation where cooperation seems risky and defection tempting. This “prisoner’s dilemma” (as it’s called in game theory) poses risks to responsible AI practices. Leaders, prioritizing speed to market, are driving the current AI arms race in which major corporate players are rushing products to market and potentially short-changing critical considerations like ethical guidelines, bias detection, and safety measures. For instance, major tech corporations are laying off their AI ethics teams precisely at a time when responsible actions are needed most.

It’s also important to recognize that the AI arms race extends beyond the developers of large language models (LLMs) such as OpenAI, Google, and Meta. It encompasses many companies utilizing LLMs to support their own custom applications. In the world of professional services, for example, PwC announced it is deploying AI chatbots for 4,000 of their lawyers, distributed across 100 countries. These AI-powered assistants will “help lawyers with contract analysis, regulatory compliance work, due diligence, and other legal advisory and consulting services.” PwC’s management is also considering expanding these AI chatbots into their tax practice. In total, the consulting giant plans to pour $1 billion into “generative AI” — a powerful new tool capable of delivering game-changing boosts to performance.

In a similar vein, KPMG launched its own AI-powered assistant, dubbed KymChat, which will help employees rapidly find internal experts across the entire organization, wrap them around incoming opportunities, and automatically generate proposals based on the match between project requirements and available talent. Their AI assistant “will better enable cross-team collaboration and help those new to the firm with a more seamless and efficient people-navigation experience.”

Slack is also incorporating generative AI into the development of Slack GPT, an AI assistant designed to help employees work smarter not harder. The platform incorporates a range of AI capabilities, such as conversation summaries and writing assistance, to enhance user productivity.

These examples are just the tip of the iceberg. Soon hundreds of millions of Microsoft 365 users will have access to Business Chat, an agent that joins the user in their work, striving to make sense of their Microsoft 365 data. Employees can prompt the assistant to do everything from developing status report summaries based on meeting transcripts and email communication to identifying flaws in strategy and coming up with solutions.

This rapid deployment of AI agents is why Arvind Krishna, CEO of IBM, recently wrote that, “[p]eople working together with trusted A.I. will have a transformative effect on our economy and society … It’s time we embrace that partnership — and prepare our workforces for everything A.I. has to offer.” Simply put, organizations are experiencing exponential growth in the installation of AI-powered tools and firms that don’t adapt risk getting left behind.

AI Risks at Work

Unfortunately, remaining competitive also introduces significant risk for both employees and employers. For example, a 2022 UNESCO publication on “the effects of AI on the working lives of women” reports that AI in the recruitment process is excluding women from upward moves. One study cited in the report, comprising 21 experiments with over 60,000 targeted job advertisements, found that “setting the user’s gender to ‘Female’ resulted in fewer instances of ads related to high-paying jobs than for users selecting ‘Male’ as their gender.” And even though this AI bias in recruitment and hiring is well known, it’s not going away anytime soon. As the UNESCO report goes on to say, “A 2021 study showed evidence of job advertisements skewed by gender on Facebook even when the advertisers wanted a gender-balanced audience.” It’s often a matter of biased data, which will continue to infect AI tools and threaten key workforce factors such as diversity, equity, and inclusion.

Discriminatory employment practices may be only one of a cocktail of legal risks that generative AI exposes organizations to. For example, OpenAI is facing its first defamation lawsuit as a result of allegations that ChatGPT produced harmful misinformation. Specifically, the system produced a summary of a real court case which included fabricated accusations of embezzlement against a radio host in Georgia. This highlights the negative impact on organizations for creating and sharing AI generated information. It underscores concerns about LLMs fabricating false and libelous content, resulting in reputational damage, loss of credibility, diminished customer trust, and serious legal repercussions.

In addition to concerns related to libel, there are risks associated with copyright and intellectual property infringements. Several high-profile legal cases have emerged where the developers of generative AI tools have been sued for the alleged improper use of licensed content. The presence of copyright and intellectual property infringements, coupled with the legal implications of such violations, poses significant risks for organizations utilizing generative AI products. Organizations can improperly use licensed content through generative AI by unknowingly engaging in activities such as plagiarism, unauthorized adaptations, commercial use without licensing, and misusing Creative Commons or open-source content, exposing themselves to potential legal consequences.

The large-scale deployment of AI also magnifies the risks of cyberattacks. The fear amongst cybersecurity experts is that generative AI could be used to identify and exploit vulnerabilities within business information systems, given the ability of LLMs to automate coding and bug detection, which could be used by malicious actors to break through security barriers. There’s also the fear of employees accidentally sharing sensitive data with third-party AI providers. A notable instance involves Samsung staff unintentionally leaking trade secrets through ChatGPT while using the LLM to review source code. Due to their failure to opt out of data sharing, confidential information was inadvertently provided to OpenAI. And even though Samsung and others are taking steps to restrict the use of third-party AI tools on company-owned devices, there’s still the concern that employees can leak information through the use of such systems on personal devices.

On top of these risks, businesses will soon have to navigate nascent, varied, and somewhat murky regulations. Anyone hiring in New York City, for instance, will have to ensure their AI-powered recruitment and hiring tech doesn’t violate the City’s “automated employment decision tool” law. To comply with the new law, employers will need to take various steps such as conducting third-party bias audits of their hiring tools and publicly disclosing the findings. AI regulation is also scaling up nationally with the Biden-Harris administration’s “Blueprint for an AI Bill of Rights” and internationally with the EU’s AI Act, which will mark a new era of regulation for employers.

This growing tangle of evolving regulations and pitfalls is why thought leaders such as Gartner are strongly suggesting that businesses “proceed but don’t over pivot” and that they “create a task force reporting to the CIO and CEO” to plan a roadmap for a safe AI transformation that mitigates various legal, reputational, and workforce risks. Leaders dealing with this AI dilemma have an important decision to make. On the one hand, there is pressing competitive pressure to fully embrace AI. On the other hand, there is growing concern that the implementation of irresponsible AI can result in severe penalties, substantial damage to reputation, and significant operational setbacks. The worry is that in their quest to stay ahead, leaders may unknowingly introduce potential time bombs into their organization, poised to cause major problems once AI solutions are deployed and regulations take effect.

For example, the National Eating Disorder Association (NEDA) recently announced it was letting go of its hotline staff and replacing them with their new chatbot, Tessa. However, just days before making the transition, NEDA discovered that their system was promoting harmful advice such as encouraging people with eating disorders to restrict their calories and to lose one to two pounds per week. The World Bank spent $1 billion to develop and deploy an algorithmic system, called Takaful, to distribute financial assistance that Human Rights Watch now says ironically creates inequity. And two lawyers from New York are facing possible disciplinary action after using ChatGPT to draft a court filing that was found to have several references to previous cases that did not exist. These instances highlight the need for well-trained and well-supported employees at the center of this digital transformation. While AI can serve as a valuable assistant, it should not assume the leading position.

Principles for Responsible AI at Work

To help decision-makers avoid negative outcomes while also remaining competitive in the age of AI, we’ve devised several principles for a sustainable AI-powered workforce. The principles are a blend of ethical frameworks from institutions like the National Science Foundation as well as legal requirements related to employee monitoring and data privacy such as the Electronic Communications Privacy Act and the California Privacy Rights Act. The steps for ensuring responsible AI at work include:

  • Informed Consent. Obtain voluntary and informed agreement from employees to participate in any AI-powered intervention after the employees are provided with all the relevant information about the initiative. This includes the program’s purpose, procedures, and potential risks and benefits.
  • Aligned Interests. The goals, risks, and benefits for both the employer and employee are clearly articulated and aligned.
  • Opt In & Easy Exits. Employees must opt into AI-powered programs without feeling forced or coerced, and they can easily withdraw from the program at any time without any negative consequences and without explanation.
  • Conversational Transparency. When AI-based conversational agents are used, the agent should formally reveal any persuasive objectives the system aims to achieve through the dialogue with the employee.
  • Debiased and Explainable AI. Explicitly outline the steps taken to remove, minimize, and mitigate bias in AI-powered employee interventions—especially for disadvantaged and vulnerable groups—and provide transparent explanations into how AI systems arrive at their decisions and actions.
  • AI Training and Development. Provide continuous employee training and development to ensure the safe and responsible use of AI-powered tools.
  • Health and Well-Being. Identify types of AI-induced stress, discomfort, or harm and articulate steps to minimize risks (e.g., how will the employer minimize stress caused by constant AI-powered monitoring of employee behavior).
  • Data Collection. Identify what data will be collected, if data collection involves any invasive or intrusive procedures (e.g., the use of webcams in work-from-home situations), and what steps will be taken to minimize risk.
  • Data. Disclose any intention to share personal data, with whom, and why.
  • Privacy and Security. Articulate protocols for maintaining privacy, storing employee data securely, and what steps will be taken in the event of a privacy breach.
  • Third Party Disclosure. Disclose all third parties used to provide and maintain AI assets, what the third party’s role is, and how the third party will ensure employee privacy.
  • Communication. Inform employees about changes in data collection, data management, or data sharing as well as any changes in AI assets or third-party relationships.
  • Laws and Regulations. Express ongoing commitment to comply with all laws and regulations related to employee data and the use of AI.

We encourage leaders to urgently adopt and develop this checklist in their organizations. By applying such principles, leaders can ensure rapid and responsible AI deployment.


3 Steps to Prepare Your Culture for AI


The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. Leaders are uniquely positioned to foster this culture within their organizations today in order to set their teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.

As business leaders, today we find ourselves in a place that’s all too familiar: the unfamiliar. Just as we steered our teams through the shift to remote and flexible work, we’re now on the verge of another seismic shift: AI. And like the shift to flexible work, priming an organization to embrace AI will hinge first and foremost on culture.

The pace and volume of work has increased exponentially, and we’re all struggling under the weight of it. Leaders and employees are eager for AI to lift the burden. That’s the key takeaway from our 2023 Work Trend Index, which surveyed 31,000 people across 31 countries and analyzed trillions of aggregated productivity signals in Microsoft 365, along with labor market trends on LinkedIn.

Nearly two-thirds of employees surveyed told us they don’t have enough time or energy to do their job. The cause of this drain is something we identified in the report as digital debt: the influx of data, emails, and chats has outpaced our ability to keep up. Employees today spend nearly 60% of their time communicating, leaving only 40% of their time for creating and innovating. In a world where creativity is the new productivity, digital debt isn’t just an inconvenience — it’s a liability.

AI promises to address that liability by allowing employees to focus on the most meaningful work. Increasing productivity, streamlining repetitive tasks, and increasing employee well-being are the top three things leaders want from AI, according to our research. Notably, amid fears that AI will replace jobs, reducing headcount was last on the list.

Becoming an AI-powered organization will require us to work in entirely new ways. As leaders, there are three steps we can take today to get our cultures ready for an AI-powered future:

Choose curiosity over fear

AI marks a new interaction model between humans and computers. Until now, the way we’ve interacted with computers has been similar to how we interact with a calculator: We ask a question or give directions, and the computer provides an answer. But with AI, the computer will be more like a copilot. We’ll need to develop a new kind of chemistry together, learning when and how to ask questions and about the importance of fact-checking responses.

Fear is a natural reaction to change, so it’s understandable for employees to feel some uncertainty about what AI will mean for their work. Our research found that while 49% of employees are concerned AI will replace their jobs, the promise of AI outweighs the threat: 70% of employees are more than willing to delegate to AI to lighten their workloads.

We’re rarely served by operating from a place of fear. By fostering a culture of curiosity, we can empower our people to understand how AI works, including its capabilities and its shortcomings. This understanding starts with firsthand experience. Encourage employees to put curiosity into action by experimenting (safely and securely) with new AI tools, such as AI-powered search, intelligent writing assistance, or smart calendaring, to name just a few. Since every role and function will have different ways to use and benefit from AI, challenge them to rethink how AI could improve or transform processes as they get familiar with the tools. From there, employees can begin to unlock new ways of working.

Embrace failure

AI will change nearly every job, and nearly every work pattern can benefit from some degree of AI augmentation or automation. As leaders, now is the time to encourage our teams to bring creativity to reimagining work, adopting a test-and-learn strategy to find ways AI can best help meet the needs of the business.

AI won’t get it right every time, but even when it’s wrong, it’s usefully wrong. It moves you at least one step forward from a blank slate, so you can jump right into the critical thinking work of reviewing, editing, or augmenting. It will take time to learn these new patterns of work and identify which processes need to change and how. But if we create a culture where experimentation and learning are viewed as a prerequisite to progress, we’ll get there much faster.

As leaders, we have a responsibility to create the right environment for failure so that our people are empowered to experiment to uncover how AI can fit into their workflows. In my experience, that includes celebrating wins as well as sharing lessons learned in order to help keep each other from wasting time learning the same lesson twice. Both formally and informally, carve out space for people to share knowledge — for example, by crowdsourcing a prompt guidebook within your department or making AI tips a standing agenda item in your monthly all-staff meetings. Operating with agility will be a foundational tenet of AI-powered organizations.

Become a learn-it-all

I often hear concerns that AI will be a crutch, offering shortcuts and workarounds that ultimately diminish innovation and engagement. In my mind, the potential for AI is so much bigger than that, and it will become a competitive advantage for those who use it thoughtfully. Those thoughtful users will become your most engaged and innovative employees.

The value you get from AI is only as good as what you put in. Simple questions will result in simple answers. But sophisticated, thought-provoking questions will result in more complex analysis and bigger ideas. The value will shift from employees who have all the right answers to employees who know how to ask the right questions. Organizations of the future will place a premium on analytical thinkers and problem-solvers who can effectively reason over AI-generated content.

At Microsoft, we believe a learn-it-all mentality will get us much farther than a know-it-all one. And while the learning curve of using AI can be daunting, it’s a muscle that has to be built over time — and that we should start strengthening today. When I talk to leaders about how to achieve this across their companies and teams, I tell them three things:

  • Establish guardrails to help people experiment safely and responsibly. Which tools do you encourage employees to use, and what data is — and isn’t — appropriate to input? What guidelines do they need to follow around fact-checking, reviewing, and editing?
  • Learning to work with AI will need to be a continuous process, not a one-time training. Infuse learning opportunities into your rhythm of business and keep employees up to date with the latest resources. For example, one team might block off Friday afternoons for learning, while another has monthly “office hours” for AI Q&A and troubleshooting. And think beyond traditional courses or resources. How can peer-to-peer knowledge sharing, such as lunch and learns or a digital hotline, play a role so people can learn from each other?
  • Embrace the need for change management. Being intentional and programmatic will be crucial for successfully adopting AI. Identify goals and metrics for success, and select AI champions or pilot program leads to help bring the vision to life. Different functions and disciplines will have different needs and challenges when it comes to AI, but one shared need will be for structure and support as we all transition to a new way of working.

The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. As leaders, we’re uniquely positioned to foster this culture within our organizations today in order to set our teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.
