How AI Will Transform Project Management

Sometime in the near future, the CEO of a large telecom provider is using a smartphone app to check on her organization’s seven strategic initiatives. Within a few taps, she knows the status of every project and what percentage of expected benefits each one has delivered. Project charters and key performance indicators are available in moments, as are each team member’s morale level and the overall buy-in of critical stakeholders.

She drills down on the “rebranding” initiative. A few months earlier, a large competitor had launched a new green brand, prompting her company to accelerate its own sustainability rollout. Many AI-driven self-adjustments have already occurred, based on parameters chosen by the project manager and the project team at the initiative’s outset. The app informs the CEO of every change that needs her attention — as well as potential risks — and prioritizes decisions that she must make, providing potential solutions to each.

Before making any choices, the CEO calls the project manager, who now spends most of his time coaching and supporting the team, maintaining regular conversations with key stakeholders, and cultivating a high-performing culture. A few weeks earlier the project had been slightly behind, and the app recommended that the team should apply agile techniques to speed up one project stream.

During the meeting, they simulate possible solutions and agree on a path forward. The project plan is automatically updated, and messages are sent informing affected team members and stakeholders of the changes and a projection of the expected results.

Thanks to new technologies and ways of working, a strategic project that could have drifted out of control — perhaps even to failure — is now again in line to be successful and deliver the expected results.

Back in the present, project management doesn’t always move along quite as smoothly, but this future is probably less than a decade away. To get there sooner, innovators and organizations should be investing in project management technology now.

Project Management Today and the Path Forward

Every year, approximately $48 trillion is invested in projects. Yet according to the Standish Group, only 35% of projects are considered successful. The wasted resources and unrealized benefits of the other 65% are mind-blowing.

For years in our research and publications, we have been promoting the modernization of project management. One reason we have found for such poor project success rates is the low maturity of the technologies available for managing projects. Most organizations and project leaders are still using spreadsheets, slides, and other applications that haven’t evolved much over the past few decades. These are adequate when you are measuring project success by deliverables and deadlines met, but they fall short in an environment where projects and initiatives are always adapting — and continuously changing the business. Project portfolio management applications have improved, but planning and team collaboration capabilities, automation, and “intelligent” features are still lacking.

If applying AI and other technological innovations to project management could improve the success ratio of projects by just 25%, it would equate to trillions of dollars of value and benefits to organizations, societies, and individuals. Each of the core technologies described in the story above is ready — the only question now is how soon they will be effectively applied to project management.

Gartner’s research indicates that change is coming soon, predicting that by 2030, 80% of project management tasks will be run by AI, powered by big data, machine learning (ML), and natural language processing. A handful of researchers, such as Paul Boudreau in his book Applying Artificial Intelligence Tools to Project Management, and a growing number of startups, have already developed algorithms to apply AI and ML in the world of project management. When this next generation of tools is widely adopted, there will be radical changes.

6 Aspects of Project Management that Will Be Disrupted

We see these coming technological developments as an opportunity like none before. Organizations and project leaders that are most prepared for this moment of disruption will stand to reap the most rewards. Nearly every aspect of project management, from planning to processes to people, will be affected. Let’s take a look at six key areas.

1. Better selection and prioritization

Selection and prioritization are a type of prediction: which projects will bring the most value to the organization? When the correct data is available, ML can detect patterns that can’t be discerned by other means and can vastly exceed human accuracy in making predictions. ML-driven prioritization will soon result in:

  • Faster identification of launch-ready projects that have the right fundamentals in place
  • Selection of projects that have higher chances of success and delivering the highest benefits
  • A better balance in the project portfolio and overview of risk in the organization
  • Removal of human biases from decision-making
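
To make the prediction framing concrete, here is a minimal sketch of ML-driven prioritization using scikit-learn. Everything in it (the file names, the feature columns, and the success label) is a hypothetical stand-in for whatever project attributes an organization actually records.

```python
# A minimal sketch of ML-driven project prioritization, assuming a
# historical dataset of completed projects. All file names and column
# names below are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("completed_projects.csv")  # one row per past project
features = ["budget_usd", "planned_months", "team_size",
            "stakeholder_buyin_score", "scope_change_count"]
X, y = history[features], history["was_successful"]  # 1 = delivered benefits

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Rank proposed projects by predicted probability of success.
candidates = pd.read_csv("proposed_projects.csv")
candidates["p_success"] = model.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("p_success", ascending=False).head())
```

In practice the hard part is assembling trustworthy historical labels, which is exactly the data problem discussed later in this article.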

2. Support for the project management office

Data analytics and automation startups are now helping organizations streamline and optimize the role of the project management office (PMO). The most famous case is President Emmanuel Macron’s use of the latest technology to maintain up-to-date information about every French public-sector project. These new intelligent tools will radically transform the way PMOs operate and perform with:

  • Better monitoring of project progress
  • The capability to anticipate potential problems and to address some simple ones automatically
  • Automated preparation and distribution of project reports, and gathering of feedback
  • Greater sophistication in selecting the best project management methodology for each project
  • Compliance monitoring for processes and policies
  • Automation, via virtual assistants, of support functions such as status updates, risk assessment, and stakeholder analysis

3. Improved, faster project definition, planning, and reporting

One of the most developed areas in project management automation is risk management. New applications use big data and ML to help leaders and project managers anticipate risks that might otherwise go unnoticed. These tools can already propose mitigating actions, and soon, they will be able to adjust the plans automatically to avoid certain types of risks.
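
As an illustration of what “anticipating risks” could look like in code, the sketch below flags projects whose weekly metrics deviate from the portfolio’s historical pattern. The metric columns are hypothetical, and anomaly detection is just one of several techniques such tools might use.

```python
# A hedged sketch of ML-based risk flagging: an IsolationForest marks
# project-weeks whose metrics look unlike the portfolio's usual pattern.
# All column names are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

metrics = pd.read_csv("weekly_project_metrics.csv")
signals = ["schedule_variance", "cost_variance",
           "open_defects", "unresolved_risks", "team_churn"]

detector = IsolationForest(contamination=0.05, random_state=0)
metrics["flag"] = detector.fit_predict(metrics[signals])  # -1 = anomalous

at_risk = metrics[metrics["flag"] == -1]
print(at_risk[["project_id", "week"] + signals])  # candidates for mitigation
```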

Similar approaches will soon facilitate project definition, planning, and reporting. These exercises are now time-consuming, repetitive, and mostly manual. ML, natural language processing, and plain text output will lead to:

  • Improved project scoping by automating the time-consuming collection and analysis of user stories, revealing potential problems such as ambiguities, duplicates, omissions, inconsistencies, and complexities (see the sketch after this list)
  • Tools that facilitate scheduling processes and draft detailed plans and resource demands
  • Automated reporting that not only requires less labor but also replaces today’s reports — which are often weeks old — with real-time data. These tools will also drill deeper than is currently possible, displaying project status, benefits achieved, potential slippage, and team sentiment in a clear, objective way.
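
Here is the sketch referenced in the first bullet: a hedged illustration of how user-story analysis might flag duplicates with sentence embeddings. The model name and the 0.85 threshold are illustrative assumptions, not a recommendation from any vendor mentioned above.

```python
# A minimal sketch of automated user-story scoping: flag near-duplicate
# stories via sentence embeddings. Model choice and threshold are
# illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

stories = [
    "As a customer, I want to reset my password by email.",
    "As a user, I want to recover my password via an email link.",
    "As an admin, I want to export monthly usage reports.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(stories, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

for i in range(len(stories)):
    for j in range(i + 1, len(stories)):
        if similarity[i][j] > 0.85:  # likely duplicates for human review
            print(f"Possible duplicate:\n  {stories[i]}\n  {stories[j]}")
```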

4. Virtual project assistants

Practically overnight, ChatGPT changed the world’s understanding of how AI can analyze massive sets of data and generate novel and immediate insights in plain text. In project management, tools like these will power “bots” or “virtual assistants.” Oracle recently announced a new project management digital assistant, which provides instant status updates and helps users update time and task progress via text, voice or chat.

The digital assistant learns from past time entries, project planning data, and the overall context to tailor interactions and smartly capture critical project information. PMOtto is an ML-enabled virtual project assistant that is already in use. A user can ask PMOtto, “Schedule John to paint the wall next week and allocate him full time to the task.” The assistant might reply, “Based on previous similar tasks allocated to John, it seems that he will need two weeks to do the work and not one week as you requested. Should I adjust it?”
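
PMOtto’s internals are not public, so the following is only a minimal sketch of the idea that exchange illustrates: checking a requested duration against the assignee’s similar past tasks. The task history and matching rule are invented for illustration.

```python
# Not PMOtto's actual implementation: a hedged sketch of sanity-checking
# a requested duration against similar past tasks. All data is invented.
import statistics

# Hypothetical history of (assignee, task keyword, actual weeks taken).
past_tasks = [
    ("john", "paint", 2.0),
    ("john", "paint", 2.5),
    ("john", "drywall", 1.0),
]

def estimated_weeks(assignee: str, keyword: str) -> float | None:
    """Median duration of this assignee's similar past tasks, if any."""
    similar = [w for a, k, w in past_tasks if a == assignee and k == keyword]
    return statistics.median(similar) if similar else None

requested = 1.0
estimate = estimated_weeks("john", "paint")
if estimate is not None and estimate > requested:
    print(f"Based on similar past tasks, John may need {estimate} weeks, "
          f"not {requested}. Should I adjust the schedule?")
```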

5. Advanced testing systems and software

Testing is another essential task in most projects, and project managers need to test early and often. It’s rare today to find a major project without multiple systems and types of software that must be tested before the project goes live. Soon, advanced testing systems that are now only feasible for certain megaprojects will become widely available.

The Elizabeth line, part of the Crossrail project in the United Kingdom, is a complex railway with new stations, new infrastructure, new tracks, and new trains; it was, therefore, important that every element of the project went through a rigorous testing and commissioning process to ensure safety and reliability. It required a never-before-seen combination of hardware and software, and after initial challenges, the project team developed the Crossrail Integration Facility. This fully automated off-site testing facility has proven invaluable in increasing systems’ efficiency, cost-effectiveness, and resilience. Systems engineer Alessandra Scholl-Sternberg describes some of its features: “An extensive system automation library has been written, which enables complex set-ups to be achieved, health checks to be accurately performed, endurance testing to occur over extended periods and the implementation of tests of repetitive nature.” Rigorous audits can be run at the facility 24/7, free from the risk of operator bias.

Advanced and automated system testing solutions for software projects will soon allow early detection of defects and enable self-correcting processes. This will significantly reduce time spent on cumbersome testing activities, reduce rework, and ultimately deliver easy-to-use, bug-free solutions.

6. A new role for the project manager

For many project managers, automating a significant part of their current tasks may feel scary, but successful ones will learn to use these tools to their advantage. Project managers will not be going away, but they will need to embrace these changes and take advantage of the new technologies. We currently think of cross-functional project teams as a group of individuals, but we may soon think of them as a group of humans and robots.

With a shift away from administrative work, the project manager of the future will need to cultivate strong soft skills, leadership capabilities, strategic thinking, and business acumen. They must focus on the delivery of the expected benefits and their alignment with strategic goals. They will also need a good understanding of these technologies. Some organizations are already building AI into their project management education and certification programs, and Northeastern University is incorporating AI into its curriculum, teaching project managers how to use AI to automate and improve data sets and optimize investment value from projects.

Data and People Make the Future a Reality

When these tools are ready for organizations, how will you make sure your organization is ready for them? Any AI adoption process begins with data, but you must not fail to prepare your people as well.

Training AI algorithms to manage projects will require large amounts of project-related data. Your organization may retain troves of historical project data, but it is likely stored in thousands of documents in a variety of file formats scattered across different systems. The information could be out of date, might use different taxonomies, or could contain outliers and gaps. Roughly 80% of the time spent preparing an ML algorithm for use goes to data gathering and cleaning, which takes raw, unstructured data and transforms it into structured data that can train a machine learning model.
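
A hedged sketch of that gathering-and-cleaning step appears below: merging exports from different systems, reconciling taxonomies, and dropping gaps and outliers. All file names, columns, and mappings are hypothetical.

```python
# A minimal sketch of preparing historical project data for ML. All
# file names, column names, and taxonomy mappings are hypothetical.
import pandas as pd

frames = [pd.read_csv(f) for f in ("legacy_ppm_export.csv", "spreadsheets.csv")]
projects = pd.concat(frames, ignore_index=True)

# Reconcile divergent status taxonomies into one vocabulary.
status_map = {"green": "on_track", "ok": "on_track",
              "amber": "at_risk", "red": "late", "overdue": "late"}
projects["status"] = projects["status"].str.lower().map(status_map)

# Drop records with gaps or implausible outliers before training.
projects = projects.dropna(subset=["status", "budget_usd", "duration_days"])
projects = projects[projects["duration_days"].between(1, 3650)]

projects.to_csv("structured_training_data.csv", index=False)
```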

Without available and properly managed data, the AI transformation will never happen at your organization — but no AI transformation will flourish if you don’t also prepare yourself and your team for the change.

This new generation of tools will not only change the technology we use to manage projects; it will completely change the work of projects themselves. Project managers must be prepared to coach and train their teams through this transition. They should increase their focus on human interactions, identify technology skill gaps in their people early, and work to address them. In addition to focusing on project deliverables, they should focus on creating high-performing teams whose members receive what they need to perform at their best.

If you are seriously considering applying AI to your projects and project management practices, the following questions will help you assess your decision.

  • Are you ready to spend time making an accurate inventory of all your projects, including the latest status update?
  • Can you dedicate resources for several months to gather, clean, and structure your project data?
  • Have you made up your mind to let go of your old project management habits, such as your monthly progress reports?
  • Are you prepared to invest in training your project management community in this new technology?
  • Are they willing to move out of their traditional comfort zones and radically change how they manage their projects?
  • Is your organization ready to accept and adopt a new technology and hand over the reins on decisions with increasingly higher stakes?
  • Are you ready to let this technology make mistakes as it learns to perform better for your organization?
  • Does your executive sponsor for this project have the capability and credibility in your organization to lead this transformation?
  • Are senior leaders willing to wait several months, up to one year, to start seeing the benefits of the automation?

If the answer to all these questions is yes, then you are ready to embark on this pioneering transformation. If you have one or more “no” answers, then you need to work on flipping them to “yes” before moving ahead.

• • •

As we have seen, the application of artificial intelligence in project management will bring significant benefits, not only by automating administrative and low-value tasks but, more important, by helping your organization, its leaders, and its project managers select, define, and implement projects more successfully.

The CEO in our story was once in the position you are in today. We encourage you to take the first steps toward this positive vision of the future of project management now.

How to Train Generative AI Using Your Company’s Data

Many companies are experimenting with ChatGPT and other large language or image models. They have generally found them to be astounding in terms of their ability to express complex ideas in articulate language. However, most users realize that these systems are primarily trained on internet-based information and can’t respond to prompts or questions regarding proprietary content or knowledge.

Leveraging a company’s proprietary knowledge is critical to its ability to compete and innovate, especially in today’s volatile environment. Organizational innovation is fueled through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how. However, knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings. As such, a company’s comprehensive knowledge is often unaccounted for and difficult to organize and deploy where needed in an effective or efficient way.

Emerging technologies in the form of large language and image generative AI models offer new opportunities for knowledge management, thereby enhancing company performance, learning, and innovation capabilities. For example, in a study conducted at a Fortune 500 provider of business process software, a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to more positive feedback from customers. The system also expedited the learning and skill development of novice agents.

Like that company, a growing number of organizations are attempting to leverage the language processing skills and general reasoning abilities of large language models (LLMs) to capture and provide broad internal (or customer) access to their own intellectual capital. They are using LLMs for such purposes as informing their customer-facing employees on company policy and product/service recommendations, solving customer service problems, or capturing employees’ knowledge before they depart the organization.

These objectives were also present during the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task. Today, however, generative AI is rekindling the possibility of capturing and disseminating important knowledge throughout an organization and beyond its walls. As one manager using generative AI for this purpose put it, “I feel like a jetpack just came into my life.” Despite current advances, some of the same factors that made knowledge management difficult in the past are still present.

The Technology for Generative AI-Based Knowledge Management

The technology to incorporate an organization’s specific domain knowledge into an LLM is evolving rapidly. At the moment there are three primary approaches to incorporating proprietary content into a generative model.

Training an LLM from Scratch

One approach is to create and train one’s own domain-specific model from scratch. That’s not a common approach, since it requires a massive amount of high-quality data to train a large language model, and most companies simply don’t have it. It also requires access to considerable computing power and well-trained data science talent.

One company that has employed this approach is Bloomberg, which recently announced that it had created BloombergGPT for finance-specific content and a natural-language interface with its data terminal. Bloomberg has over 40 years’ worth of financial data, news, and documents, which it combined with a large volume of text from financial filings and internet data. In total, Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time. Few companies have those resources available.

Fine-Tuning an Existing LLM

A second approach is to “fine-tune” an existing LLM, adding specific domain content to a system that is already trained on general knowledge and language-based interaction. This approach involves adjusting some parameters of a base model and typically requires substantially less data — usually only hundreds or thousands of documents rather than millions or billions — and less computing time than creating a new model from scratch.
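
No vendor’s exact pipeline is described here, but as a rough illustration, the sketch below fine-tunes a small open model on a file of domain documents with the Hugging Face transformers library. The base model, data file, and hyperparameters are placeholder assumptions, not a production recipe.

```python
# A hedged sketch of fine-tuning an open LLM on domain text with
# Hugging Face transformers. Model choice, data file, and settings
# are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for any open base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpus: one domain document per line of a text file.
docs = load_dataset("text", data_files="domain_docs.txt")["train"]
docs = docs.map(
    lambda d: tokenizer(d["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=docs,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adjusts the base model's parameters on the new domain
```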

Google, for example, used fine-tune training on its Med-PaLM2 (second version) model for medical knowledge. The research project started with Google’s general PaLM2 LLM and retrained it on carefully curated medical knowledge from a variety of public medical datasets. The model was able to answer 85% of U.S. medical licensing exam questions — almost 20% better than the first version of the system. Despite this rapid progress, when tested on such criteria as scientific factuality, precision, medical consensus, reasoning, bias and harm, and evaluated by human experts from multiple countries, the development team felt that the system still needed substantial improvement before being adopted for clinical practice.

The fine-tuning approach has some constraints, however. Although it requires much less computing power and time than training an LLM from scratch, fine-tuning can still be expensive, which was not a problem for Google but would be for many other companies. It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors. Some data scientists argue that it is best suited not to adding new content, but rather to adding new content formats and styles (such as chat or writing like William Shakespeare). Additionally, some LLM vendors (for example, OpenAI) do not allow fine-tuning on their latest LLMs, such as GPT-4.

Prompt-tuning an Existing LLM

Perhaps the most common approach to customizing the content of an LLM, for companies that are not cloud vendors, is to tune it through prompts. With this approach, the original model is kept frozen and is modified through prompts in the context window that contain domain-specific knowledge. After prompt tuning, the model can answer questions related to that knowledge. This approach is the most computationally efficient of the three, and it does not require a vast amount of data to be trained on a new content domain.

Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge. The goal was to provide the company’s financial advisors with accurate and easily accessible knowledge on key issues they encounter in their roles advising clients. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.

While this is perhaps the easiest of the three approaches for an organization to adopt, it is not without technical challenges. When using unstructured data like text as input to an LLM, the data is likely to be too large, with too many important attributes, to enter directly into the model’s context window. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada). The vector embeddings are a more compact representation of the data that preserves contextual relationships in the text. When a user enters a prompt into the system, a similarity algorithm determines which vectors should be submitted to the GPT-4 model. Although several vendors are offering tools to make this process of prompt tuning easier, it is still complex enough that most companies adopting the approach would need substantial data science talent.
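
Morgan Stanley’s production system is of course far more elaborate, but the embed-retrieve-prompt pattern the paragraph describes can be sketched in a few lines with OpenAI’s Python SDK. The two-document corpus and the question are invented; real deployments add a vector database, document chunking, and access controls.

```python
# A minimal sketch of the embed-retrieve-prompt pattern. The corpus and
# question are invented; model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
corpus = [
    "Policy: disclosure rules for equity research distribution ...",
    "Guide: procedures for retirement account rollovers ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)
question = "What are the procedures for retirement account rollovers?"
q_vec = embed([question])[0]

# Cosine similarity selects the most relevant text for the context window.
scores = (corpus_vecs @ q_vec) / (
    np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q_vec))
context = corpus[int(scores.argmax())]

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"Answer using only this source: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

Note that the base model stays frozen throughout; only what is placed in the context window changes, which is why this approach is so much cheaper than fine-tuning.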

However, this approach does not need to be very time-consuming or expensive if the needed content is already present. The investment research company Morningstar, for example, used prompt tuning and vector embeddings for its Mo research tool built on generative AI. It incorporates more than 10,000 pieces of Morningstar research. After only a month or so of work on its system, Morningstar opened Mo to its financial advisors and independent investor customers. It even attached Mo to a digital avatar that could speak its answers aloud. This technical approach is not expensive; in its first month in use, Mo answered 25,000 questions at an average cost of $0.002 per question, for a total cost of $3,000.

Content Curation and Governance

As with traditional knowledge management, in which documents were loaded into discussion databases like Microsoft SharePoint, content needs to be high-quality before customizing LLMs in any fashion. In some cases, as with the Google Med-PaLM2 system, there are widely available databases of medical knowledge that have already been curated. Otherwise, a company needs to rely on human curation to ensure that knowledge content is accurate, timely, and not duplicated. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria that determine their suitability for incorporation into the GPT-4 system. Most companies that do not already have well-curated content will find it challenging to curate it for just this purpose.

Morgan Stanley has also found that it is much easier to maintain high-quality knowledge if content authors are aware of how to create effective documents. They are required to take two courses, one on the document management tool and a second on how to write and tag these documents. This is a component of the company’s approach to content governance — a systematic method for capturing and managing important digital content.

At Morningstar, content creators are being taught what type of content works well with the Mo system and what does not. They submit their content into a content management system and it goes directly into the vector database that supplies the OpenAI model.

Quality Assurance and Evaluation

An important aspect of managing generative AI content is ensuring quality. Generative AI is widely known to “hallucinate” on occasion, confidently stating facts that are incorrect or nonexistent. Errors of this type can be problematic for businesses but could be deadly in healthcare applications. The good news is that companies that have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than with out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.

Companies adopting these approaches to generative AI knowledge management should develop an evaluation strategy. For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks. The Google Med-PaLM2 system, eventually oriented to answering patient and physician medical questions, had a much more extensive evaluation strategy, reflecting the criticality of accuracy and safety in the medical domain.

Life or death isn’t an issue at Morgan Stanley, but producing highly accurate responses to financial and investing questions is important to the firm, its clients, and its regulators. The answers provided by the system were carefully evaluated by human reviewers before it was released to any users. Then it was piloted for several months by 300 financial advisors. As its primary approach to ongoing evaluation, Morgan Stanley has a set of 400 “golden questions” to which the correct answers are known. Every time any change is made to the system, employees test it with the golden questions to see if there has been any “regression,” or less accurate answers.
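
The firm’s actual harness isn’t public, but a “golden questions” check can be as simple as the sketch below: a fixed question set with known answers, rerun after every system change. The question, the expected-keyword check, and the `ask` stub are all hypothetical.

```python
# A hedged sketch of a golden-questions regression check. The question,
# expected keyword, and `ask` stub are hypothetical; real evaluations
# often use human reviewers or model-based graders instead.
golden = [
    {"q": "What is this year's IRA contribution limit?",
     "must_include": "$6,500"},  # hypothetical expected fact
]

def ask(question: str) -> str:
    """Placeholder: replace with a call to the tuned knowledge system."""
    return ""

def regression_suite() -> float:
    passed = 0
    for case in golden:
        answer = ask(case["q"])
        if case["must_include"] in answer:
            passed += 1
        else:
            print(f"REGRESSION on {case['q']!r}: got {answer!r}")
    return passed / len(golden)

# Rerun after every change; any drop in pass rate signals a regression.
print(f"Pass rate: {regression_suite():.0%}")
```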

Legal and Governance Issues

Legal and governance issues associated with LLM deployments are complex and evolving, leading to risk factors involving intellectual property, data privacy and security, bias and ethics, and false/inaccurate output. Currently, the legal status of LLM outputs is still unclear. Since LLMs don’t produce exact replicas of any of the text used to train the model, many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws). In any case, it is a good idea for any company making extensive use of generative AI for managing knowledge (or most other purposes for that matter) to have legal representatives involved in the creation and governance process for tuned LLMs. At Morningstar, for example, the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
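
Morningstar’s actual pre-prompts are not public; the snippet below is only a sketch of the pattern the attorneys’ work suggests: a standing system message that scopes what the assistant may answer and how it should politely decline everything else.

```python
# A hypothetical pre-prompt in the spirit described above; the wording
# is invented, not Morningstar's.
PRE_PROMPT = (
    "You are an investment-research assistant. Answer only questions "
    "about the research documents provided to you. Do not give "
    "personalized investment, legal, or tax advice. If a question is "
    "out of scope, reply: 'I can't help with that, but I can answer "
    "questions about our published research.'"
)

messages = [
    {"role": "system", "content": PRE_PROMPT},
    {"role": "user", "content": "Should I buy this stock today?"},
]
# `messages` would be prepended to every request sent to the chat model,
# steering it to decline the out-of-scope question above.
```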

User prompts into publicly-available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

In order to address confidentiality and privacy concerns, some vendors are providing advanced and improved safety and security features for LLMs, including erasing user prompts, restricting certain topics, and preventing source code and proprietary data from being input into publicly accessible LLMs. Furthermore, vendors of enterprise software systems are incorporating a “Trust Layer” into their products and services. Salesforce, for example, incorporated its Einstein GPT feature into its AI Cloud suite to address the “AI Trust Gap” between companies that want to quickly deploy LLM capabilities and the aforementioned risks these systems pose in business environments.

Shaping User Behavior

Ease of use, broad public availability, and useful answers that span various knowledge domains have led to rapid and somewhat unguided and organic adoption of generative AI-based knowledge management by employees. For example, a recent survey indicated that more than a third of surveyed employees used generative AI in their jobs, but 68% of respondents didn’t inform their supervisors that they were using the tool. To realize opportunities and manage potential risks of generative AI applications to knowledge management, companies need to develop a culture of transparency and accountability that would make generative AI-based knowledge management systems successful.

In addition to implementation of policies and guidelines, users need to understand how to safely and effectively incorporate generative AI capabilities into their tasks to enhance performance and productivity. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work. Generative AI-based knowledge management systems can automate information-intensive search processes (legal case research, for example) as well as high-volume and low-complexity cognitive tasks such as answering routine customer emails. This approach increases efficiency of employees, freeing them to put more effort into the complex decision-making and problem-solving aspects of their jobs.

Some specific behaviors that might be desirable to inculcate — either through training or policies — include:

  • Knowledge of what types of content are available through the system;
  • How to create effective prompts;
  • What types of prompts and dialogues are allowed, and which ones are not;
  • How to request additional knowledge content to be added to the system;
  • How to use the system’s responses in dealing with customers and partners;
  • How to create new content in a useful and effective manner.

Both Morgan Stanley and Morningstar trained content creators in particular on how best to create and tag content, and what types of content are well-suited to generative AI usage.

“Everything Is Moving Very Fast”

One of the executives we interviewed said, “I can tell you what things are like today. But everything is moving very fast in this area.” New LLMs and new approaches to tuning their content are announced daily, as are new products from vendors with specific content or task foci. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.

While there are many challenging issues involved in building and using generative AI systems trained on a company’s own knowledge content, we’re confident that the overall benefit to the company is worth the effort to address these challenges. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.

11 Ways Tech Adoption Impacts Your Small Biz Growth

Small businesses rely heavily on technology to drive development and innovation. Adopting the correct technological solutions can help to streamline processes, increase efficiency, improve client experiences, and create a competitive advantage in the market.

In this post, we will look at how technology contributes to the growth and success of small enterprises.

1. Streamlining Operations

Implementing small business technology solutions can automate and streamline various aspects of small business operations. This includes using project management software, customer relationship management (CRM) systems, inventory management tools, and accounting software. Streamlining operations not only saves time and reduces manual errors but also allows small businesses to allocate resources more efficiently.

Tip: Regularly assess your business processes and identify areas that can be automated or improved with technology. This continuous evaluation ensures that your technology solutions remain aligned with your evolving business needs.

2. Enhancing Customer Engagement

Technology enables small businesses to engage and connect with their customers more effectively. Social media platforms, email marketing software, and customer service tools allow businesses to communicate and build relationships with their target audience. Customer relationship management systems help businesses track customer interactions and preferences, providing insights to deliver personalized experiences and improve customer satisfaction.

Tip: Leverage data from customer interactions to create targeted marketing campaigns and personalized offers. Use automation tools to send timely and relevant messages to your customers, enhancing their engagement and loyalty.

3. Expanding Market Reach

The internet and digital marketing platforms provide small businesses with the opportunity to reach a broader audience beyond their local market. Creating a professional website, utilizing search engine optimization (SEO), and leveraging online advertising channels allow small businesses to attract and engage customers from different regions or even globally. E-commerce platforms enable businesses to sell products or services online, further expanding their market reach.

Tip: Continuously monitor and optimize your online presence to ensure your website is discoverable and user-friendly. Leverage analytics tools to track website traffic, visitor behavior, and conversion rates to make data-driven improvements.

4. Improving Decision-Making with Data

Technology provides small businesses with access to valuable data and analytics, enabling informed decision-making. Through data analysis, businesses can gain insights into customer behavior, market trends, and operational performance. This data-driven approach allows small businesses to make strategic decisions, optimize processes, and identify growth opportunities more effectively.

Tip: Invest in data analytics tools and dashboards that can consolidate and visualize your business data. Regularly review and analyze the data to uncover patterns, identify bottlenecks, and make data-backed decisions to drive growth.

5. Facilitating Remote Work and Collaboration

Advancements in technology have made remote work and collaboration more feasible for small businesses. Cloud-based tools, project management software, and communication platforms enable teams to work together efficiently, regardless of geographical location. This flexibility opens up opportunities to access talent from anywhere, increase productivity, and reduce overhead costs.

Tip: Establish clear communication protocols and project management workflows to ensure effective collaboration among remote teams. Use video conferencing tools for virtual meetings and foster a culture of transparency and accountability to maintain productivity and engagement.

6. Embracing Emerging Technologies

Small businesses should stay informed about emerging technologies that have the potential to transform their industries. Technologies such as artificial intelligence, machine learning, blockchain, and the Internet of Things can offer new opportunities for growth and innovation. Being open to adopting and integrating these technologies into your business strategy can give you a competitive advantage.

7. Data Security and Privacy

Data security and privacy are critical considerations when using technology in small businesses. Implement robust cybersecurity measures, such as firewalls, encryption, and secure data storage, to protect sensitive customer information and intellectual property. Regularly update software and educate employees on best practices for data security to minimize the risk of data breaches.

8. Customer Relationship Management (CRM) Systems

A dedicated CRM system can help small businesses manage customer relationships more efficiently. It allows businesses to track customer interactions, store contact information, and monitor sales pipelines. Utilize CRM software to streamline sales and marketing processes, personalize customer interactions, and nurture long-term customer loyalty.

9. Continuous Learning and Skill Development

Encourage continuous learning and skill development among employees to keep up with technological advancements. Provide access to online courses, training resources, and workshops to enhance digital literacy and proficiency. Embrace a culture of learning and innovation to ensure your small business remains adaptable and competitive in the digital age.

10. Scalable and Flexible Technology Solutions

Choose technology solutions that are scalable and flexible to accommodate your growing business needs. Consider cloud-based software and platforms that allow you to easily scale up or down as your business evolves. This scalability enables small businesses to adapt to changing demands and seize new opportunities without significant disruptions.

11. Regular Technology Assessments

Regularly assess your technology infrastructure to ensure it aligns with your business goals and remains up to date. Conduct technology audits to identify areas for improvement, eliminate outdated systems, and explore new technologies that can drive growth. Stay proactive in evaluating and optimizing your technology stack to maximize its impact on your small business.

Conclusion

Technology serves as a catalyst for small business growth. By leveraging technology effectively and staying agile in an ever-evolving digital landscape, small businesses can unlock their full potential, adapt to changing customer expectations, and drive sustainable growth.

13 Principles for Using AI Responsibly

The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed may lead to neglecting ethical guidelines, bias detection, and safety measures. Known and emerging concerns associated with AI in the workplace include the spread of misinformation, copyright and intellectual property concerns, cybersecurity, data privacy, as well as navigating rapid and ambiguous regulations. To mitigate these risks, we propose thirteen principles for responsible AI at work.

Love it or loathe it, the rapid expansion of AI will not slow down anytime soon. But AI blunders can quickly damage a brand’s reputation — just ask Microsoft’s first chatbot, Tay. In the tech race, all leaders fear being left behind if they slow down while others don’t. It’s a high-stakes situation where cooperation seems risky and defection tempting. This “prisoner’s dilemma” (as it’s called in game theory) poses risks to responsible AI practices. Leaders, prioritizing speed to market, are driving the current AI arms race, in which major corporate players are rushing products and potentially short-changing critical considerations like ethical guidelines, bias detection, and safety measures. For instance, major tech corporations are laying off their AI ethics teams precisely at a time when responsible actions are needed most.

It’s also important to recognize that the AI arms race extends beyond the developers of large language models (LLMs) such as OpenAI, Google, and Meta. It encompasses many companies utilizing LLMs to support their own custom applications. In the world of professional services, for example, PwC announced it is deploying AI chatbots for 4,000 of its lawyers, distributed across 100 countries. These AI-powered assistants will “help lawyers with contract analysis, regulatory compliance work, due diligence, and other legal advisory and consulting services.” PwC’s management is also considering expanding these AI chatbots into its tax practice. In total, the consulting giant plans to pour $1 billion into “generative AI” — a powerful new tool capable of delivering game-changing boosts to performance.

In a similar vein, KPMG launched its own AI-powered assistant, dubbed KymChat, which will help employees rapidly find internal experts across the entire organization, wrap them around incoming opportunities, and automatically generate proposals based on the match between project requirements and available talent. Their AI assistant “will better enable cross-team collaboration and help those new to the firm with a more seamless and efficient people-navigation experience.”

Slack is also incorporating generative AI into the development of Slack GPT, an AI assistant designed to help employees work smarter not harder. The platform incorporates a range of AI capabilities, such as conversation summaries and writing assistance, to enhance user productivity.

These examples are just the tip of the iceberg. Soon hundreds of millions of Microsoft 365 users will have access to Business Chat, an agent that joins the user in their work, striving to make sense of their Microsoft 365 data. Employees can prompt the assistant to do everything from developing status report summaries based on meeting transcripts and email communication to identifying flaws in strategy and coming up with solutions.

This rapid deployment of AI agents is why Arvind Krishna, CEO of IBM, recently wrote that, “[p]eople working together with trusted A.I. will have a transformative effect on our economy and society … It’s time we embrace that partnership — and prepare our workforces for everything A.I. has to offer.” Simply put, organizations are experiencing exponential growth in the installation of AI-powered tools, and firms that don’t adapt risk getting left behind.

AI Risks at Work

Unfortunately, remaining competitive also introduces significant risk for both employees and employers. For example, a 2022 UNESCO publication on “the effects of AI on the working lives of women” reports that AI in the recruitment process is excluding women from upward moves. One study cited in the report, comprising 21 experiments with over 60,000 targeted job advertisements, found that “setting the user’s gender to ‘Female’ resulted in fewer instances of ads related to high-paying jobs than for users selecting ‘Male’ as their gender.” And even though this AI bias in recruitment and hiring is well known, it’s not going away anytime soon. As the UNESCO report goes on to say, “A 2021 study showed evidence of job advertisements skewed by gender on Facebook even when the advertisers wanted a gender-balanced audience.” It’s often a matter of biased data, which will continue to infect AI tools and threaten key workforce factors such as diversity, equity, and inclusion.

Discriminatory employment practices may be only one of a cocktail of legal risks that generative AI exposes organizations to. For example, OpenAI is facing its first defamation lawsuit as a result of allegations that ChatGPT produced harmful misinformation. Specifically, the system produced a summary of a real court case that included fabricated accusations of embezzlement against a radio host in Georgia. This highlights the risk organizations take on when creating and sharing AI-generated information, and it underscores concerns about LLMs fabricating false and libelous content, resulting in reputational damage, loss of credibility, diminished customer trust, and serious legal repercussions.

In addition to concerns related to libel, there are risks associated with copyright and intellectual property infringements. Several high-profile legal cases have emerged where the developers of generative AI tools have been sued for the alleged improper use of licensed content. The presence of copyright and intellectual property infringements, coupled with the legal implications of such violations, poses significant risks for organizations utilizing generative AI products. Organizations can improperly use licensed content through generative AI by unknowingly engaging in activities such as plagiarism, unauthorized adaptations, commercial use without licensing, and misusing Creative Commons or open-source content, exposing themselves to potential legal consequences.

The large-scale deployment of AI also magnifies the risks of cyberattacks. The fear amongst cybersecurity experts is that generative AI could be used to identify and exploit vulnerabilities within business information systems, given the ability of LLMs to automate coding and bug detection, which could be used by malicious actors to break through security barriers. There’s also the fear of employees accidentally sharing sensitive data with third-party AI providers. A notable instance involves Samsung staff unintentionally leaking trade secrets through ChatGPT while using the LLM to review source code. Due to their failure to opt out of data sharing, confidential information was inadvertently provided to OpenAI. And even though Samsung and others are taking steps to restrict the use of third-party AI tools on company-owned devices, there’s still the concern that employees can leak information through the use of such systems on personal devices.

On top of these risks, businesses will soon have to navigate nascent, varied, and somewhat murky regulations. Anyone hiring in New York City, for instance, will have to ensure their AI-powered recruitment and hiring tech doesn’t violate the City’s “automated employment decision tool” law. To comply with the new law, employers will need to take various steps such as conducting third-party bias audits of their hiring tools and publicly disclosing the findings. AI regulation is also scaling up nationally with the Biden-Harris administration’s “Blueprint for an AI Bill of Rights” and internationally with the EU’s AI Act, which will mark a new era of regulation for employers.

This growing tangle of evolving regulations and pitfalls is why thought leaders such as Gartner are strongly suggesting that businesses “proceed but don’t over pivot” and that they “create a task force reporting to the CIO and CEO” to plan a roadmap for a safe AI transformation that mitigates various legal, reputational, and workforce risks. Leaders dealing with this AI dilemma have important decisions to make. On the one hand, there is pressing competitive pressure to fully embrace AI. On the other hand, there is growing concern that implementing AI irresponsibly can result in severe penalties, substantial damage to reputation, and significant operational setbacks. The worry is that in their quest to stay ahead, leaders may unknowingly introduce potential time bombs into their organizations, poised to cause major problems once AI solutions are deployed and regulations take effect.

For example, the National Eating Disorder Association (NEDA) recently announced it was letting go of its hotline staff and replacing them with its new chatbot, Tessa. However, just days before making the transition, NEDA discovered that the system was promoting harmful advice, such as encouraging people with eating disorders to restrict their calories and to lose one to two pounds per week. The World Bank spent $1 billion to develop and deploy an algorithmic system, called Takaful, to distribute financial assistance that Human Rights Watch now says ironically creates inequity. And two lawyers from New York are facing possible disciplinary action after using ChatGPT to draft a court filing that was found to contain several references to previous cases that did not exist. These instances highlight the need for well-trained and well-supported employees at the center of this digital transformation. While AI can serve as a valuable assistant, it should not assume the leading position.

Principles for Responsible AI at Work

To help decision-makers avoid negative outcomes while also remaining competitive in the age of AI, we’ve devised several principles for a sustainable AI-powered workforce. The principles are a blend of ethical frameworks from institutions like the National Science Foundation as well as legal requirements related to employee monitoring and data privacy such as the Electronic Communications Privacy Act and the California Privacy Rights Act. The steps for ensuring responsible AI at work include:

  • Informed Consent. Obtain voluntary and informed agreement from employees to participate in any AI-powered intervention after the employees are provided with all the relevant information about the initiative. This includes the program’s purpose, procedures, and potential risks and benefits.
  • Aligned Interests. The goals, risks, and benefits for both the employer and employee are clearly articulated and aligned.
  • Opt In & Easy Exits. Employees must opt into AI-powered programs without feeling forced or coerced, and they can easily withdraw from the program at any time without any negative consequences and without explanation.
  • Conversational Transparency. When AI-based conversational agents are used, the agent should formally reveal any persuasive objectives the system aims to achieve through the dialogue with the employee.
  • Debiased and Explainable AI. Explicitly outline the steps taken to remove, minimize, and mitigate bias in AI-powered employee interventions—especially for disadvantaged and vulnerable groups—and provide transparent explanations into how AI systems arrive at their decisions and actions.
  • AI Training and Development. Provide continuous employee training and development to ensure the safe and responsible use of AI-powered tools.
  • Health and Well-Being. Identify types of AI-induced stress, discomfort, or harm and articulate steps to minimize risks (e.g., how will the employer minimize stress caused by constant AI-powered monitoring of employee behavior).
  • Data Collection. Identify what data will be collected, if data collection involves any invasive or intrusive procedures (e.g., the use of webcams in work-from-home situations), and what steps will be taken to minimize risk.
  • Data Sharing. Disclose any intention to share personal data, with whom, and why.
  • Privacy and Security. Articulate protocols for maintaining privacy, storing employee data securely, and what steps will be taken in the event of a privacy breach.
  • Third Party Disclosure. Disclose all third parties used to provide and maintain AI assets, what the third party’s role is, and how the third party will ensure employee privacy.
  • Communication. Inform employees about changes in data collection, data management, or data sharing as well as any changes in AI assets or third-party relationships.
  • Laws and Regulations. Express ongoing commitment to comply with all laws and regulations related to employee data and the use of AI.

We encourage leaders to urgently adopt and develop this checklist in their organizations. By applying such principles, leaders can ensure rapid and responsible AI deployment.
