
This Responsible AI Use Policy aims to guide the ethical and responsible use of Artificial Intelligence (AI) technologies within Columbus Global. As we navigate the rapidly evolving landscape of AI, we acknowledge the potential of AI to transform our services, increase efficiency, and create value for our clients. However, we equally recognize the risks associated with its misuse, including data privacy breaches, bias in decision-making, and transparency issues. 
This policy applies to all our consultants involved in providing services to clients, setting forth the principles and guidelines to ensure AI is used responsibly, ethically, and in the best interests of all stakeholders.

Purpose and Scope

The purpose of this Responsible AI Use policy is to outline the guidelines for the ethical and responsible use of Artificial Intelligence (AI) by Columbus Global consultants. As a technology consultancy, it is our duty to ensure that our clients are provided with high-quality services while also addressing any potential risks associated with the use of AI.
This policy applies to all consultants employed by Columbus Global who are involved in the delivery of services, or in the development, deployment, or use of AI technologies on behalf of our clients. It is essential that all employees understand and adhere to this policy when working with AI systems.
AI represents a broad range of technologies which have the potential to greatly benefit society by automating tasks, improving decision making, and creating new opportunities. However, it also poses ethical concerns such as bias, discrimination, privacy violations, and loss of human control. Therefore, the scope of this policy extends to all forms of AI technology used by Columbus Global consultants in any context, including but not limited to client projects and research and development.
We are committed to the responsible and transparent use of AI and believe it is our responsibility to mitigate the potential risks that may arise as these technologies become more prevalent in our day-to-day lives.

What is AI?

Artificial Intelligence (AI) is a branch of computer science that aims to create systems capable of performing tasks that would require human intelligence, such as understanding natural language, recognizing patterns, solving problems, and making decisions. In the workplace, AI can automate routine tasks, predict trends, provide insightful data analysis, and enhance decision-making processes. It's used in various domains, from finance to HR, to improve efficiencies, drive innovation, and create a competitive edge. However, the use of AI must be managed responsibly to prevent potential risks, including data privacy issues, algorithmic bias, and ethical concerns.
Artificial Intelligence can be segmented into different types, each with its own applications and capabilities. Here are some of the most common types:

  • Traditional Machine Learning: This is a type of AI where systems can learn from historical data, identify patterns, and make decisions with minimal human intervention. Examples of applications include predictive models in finance or health, and recommendation systems in e-commerce or entertainment.
  • Natural Language Processing (NLP): NLP allows systems to understand, interpret, and generate human language in a valuable way. It's widely used in chatbots, voice recognition systems, and translation services.
  • Deep Learning: Deep learning models, or artificial neural networks, mimic the function of neurons in the brain. They can perform many of the same tasks as traditional machine learning, but can leverage far more data and therefore often provide more accurate results.
  • Computer Vision: This type of AI processes and interprets visual data, enabling systems to understand and make decisions based on what they see. It's used in areas like autonomous driving, facial recognition, and image editing.
  • Reinforcement Learning: This is a type of machine learning where an agent learns to make decisions by taking actions in an environment to achieve a goal. It's commonly used in gaming, navigation, and robotics.
  • Generative AI: This type of AI can create content from scratch, such as images, sound, and text. By learning from a large amount of data, generative AI models can generate new data that is similar but not identical to the data it was trained on. This is commonly used in creating realistic computer graphics, synthesizing voice in different languages, or creating written content. These technologies make use of many other types of AI such as NLP and reinforcement learning to achieve their results.
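To make the first category above concrete, the following sketch shows "traditional machine learning" in miniature: a 1-nearest-neighbour classifier that learns from labelled historical data and predicts the label of a new case. The data and labels are invented purely for illustration.

```python
# A minimal illustration of traditional machine learning: a 1-nearest-neighbour
# classifier. It "learns" from historical (features, label) examples and
# predicts by finding the most similar past case. All data here is invented.

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, new_point):
    """Return the label of the closest historical example."""
    nearest = min(training_data, key=lambda row: euclidean(row[0], new_point))
    return nearest[1]

# Hypothetical historical examples: (features, label),
# e.g. (weekly hours of product use, support calls) -> customer outcome.
history = [
    ((1.0, 9.0), "churn"),
    ((2.0, 8.0), "churn"),
    ((8.0, 1.0), "retain"),
    ((9.0, 2.0), "retain"),
]

print(predict(history, (8.5, 1.5)))  # near the "retain" cluster
```

Real systems use far more sophisticated algorithms, but the principle is the same: patterns in historical data drive predictions about new data, with minimal human intervention at prediction time.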

Glossary of Key AI Terms

  • Algorithms: These are a set of rules or instructions followed by an AI or machine learning system to solve a problem or achieve a specific output. They can be seen as the 'recipe' that the system follows to learn from data and make decisions.
  • Models: In the context of AI, a model is the representation of what a machine learning system has learned from the training data. It's the output of the machine learning algorithm after it has been trained on the data.
  • Training Data: This is the initial set of data used to help the machine learning system learn and adjust its algorithms. It includes both the input data and the corresponding correct outputs.
  • Test Data: After the model has been trained using the training data, it's tested using a separate set of data, known as the test data, to evaluate its accuracy.
  • Bias: In AI, bias refers to errors or assumptions in the learning algorithm that lead to unfair outcomes or decisions. It can also refer to the skew in data that can lead to unfair or discriminatory results.
  • Transparency: In the context of AI, transparency refers to the openness and understandability of AI decision-making processes. A transparent AI system should allow humans to understand and interpret how it makes decisions, why a specific decision was made, and how potential errors can be corrected. This principle is crucial to building trust, mitigating bias, and ensuring accountability in AI systems.
  • Supervised Learning: A type of machine learning where the model is trained on a labelled dataset, meaning the correct answers (or outputs) are supplied with the input data.
  • Unsupervised Learning: A type of machine learning where the model is trained on an unlabelled dataset, meaning the model has to find patterns and relationships in the data on its own.
  • Large Language Models (LLMs): These are machine learning models trained on a large corpus of text data. They are able to generate human-like text by predicting the probability of a word given the previous words used in the text. Models like GPT-3 are examples of Large Language Models.
  • Generative Adversarial Networks (GANs): A class of machine learning systems invented by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game, hence 'adversarial'. These networks can generate new, synthetic instances of data that can pass for real data. GANs are often used for image generation.
  • Transformer Models: A type of model architecture used primarily in NLP. They use attention mechanisms and are known for their ability to handle long-range dependencies in text. Transformer models form the foundation of most generative AI.
  • Transfer Learning: A machine learning method where a model trained on one task is used as a starting point for a model on a second task. This is extensively used in NLP to fine-tune models on specific tasks.
  • Fine-tuning: In machine learning, fine-tuning is a process to take a model that has already been trained for one task and then tune or adapt the model to a similar task. Fine-tuning is usually faster and less resource-intensive than training a model from scratch.
  • Expert Systems: These are AI programs that use knowledge and reasoning procedures to solve problems that are difficult enough to require human expertise. Unlike other AI systems, they are designed to make decisions based on a set of rules and facts. These systems are designed to mimic the decision-making ability of a human expert, hence the name 'Expert Systems'.
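Several of the glossary terms above fit together in a single workflow: a model is trained on training data (supervised learning, since the correct outputs are supplied) and then evaluated on held-out test data. The toy sketch below, with invented numbers, shows that workflow end to end; the "model" is simply a learned threshold.

```python
# Glossary terms in action: training data, test data, model, supervised learning.
# Each example is (feature value, correct label) — supervised, because the
# correct outputs are supplied alongside the inputs. Numbers are invented.

dataset = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1), (2, 0), (5, 1)]

training_data = dataset[:6]   # used to fit the model
test_data = dataset[6:]       # held back to evaluate it

def train(rows):
    """'Training': pick the threshold that best separates the two classes."""
    best_threshold, best_correct = None, -1
    for t in range(0, 8):
        correct = sum((x > t) == bool(y) for x, y in rows)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# The "model" is what was learned from the training data: here, a threshold.
model = train(training_data)

# Accuracy is measured on test data the model never saw during training.
accuracy = sum((x > model) == bool(y) for x, y in test_data) / len(test_data)
print(model, accuracy)
```

Keeping test data separate from training data is what makes the accuracy figure trustworthy: a model scored only on the data it learned from can look far better than it really is.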

Ethical Guidelines

At Columbus Global, we believe in the responsible and ethical use of AI technology. To ensure that our use of AI aligns with our values and principles, all consultants are required to adhere to the following ethical guidelines:

  1. Transparency: We should always strive to make our AI systems as transparent as possible. The decision-making process of the AI should be explainable and understandable by humans. In case of any unexpected behavior or output, people should be able to trace back the steps of the AI system to understand how it arrived at that decision.
  2. Fairness: Our AI systems should make decisions and predictions in a fair manner, without favoritism or discrimination. This means we should avoid using biased data for training our AI systems, and regularly test them for any signs of bias or discrimination.
  3. Bias Mitigation: In order to maintain fairness, we must actively work to mitigate bias in our AI systems. This includes using unbiased data when training our models, thoroughly testing our systems for bias, and correcting any bias we find. It is crucial that we consider the impact of our AI systems on all individuals and groups, and ensure they don't reinforce harmful stereotypes or discrimination.
  4. Non-Discrimination: We should always ensure that our AI systems do not discriminate against any individual or group based on their race, gender, sexual orientation, religion, or other protected characteristics. This includes using diverse and representative data when training our AI systems, and actively monitoring them for any discriminatory behavior.
  5. Privacy: Protecting the privacy of individuals is of utmost importance. We should respect and protect the privacy rights of all individuals by ensuring that our AI systems do not infringe upon these rights. This includes implementing strong data protection measures, only collecting and using data with proper consent, and anonymizing data whenever possible.
  6. Accountability: Accountability is key to maintaining trust in our AI systems. We should always be accountable for the decisions and actions of our AI systems. This means taking responsibility when our AI systems cause harm or behave inappropriately and taking steps to rectify the situation.
  7. Human Involvement in Decision Making: We must ensure that humans remain integral to the decision-making processes involving AI. Human judgment should be involved in validating the outputs of AI systems, particularly in high-stakes contexts. This ensures that our AI systems serve as tools to augment human decision-making, not replace it, fostering a collaborative relationship between humans and AI.
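Guidelines 2–4 call for regularly testing AI systems for bias and discrimination. One simple, widely used check is to compare the rate of favourable outcomes a system gives each group (sometimes called demographic parity). The sketch below assumes a hypothetical audit log of (group, decision) pairs; group names and decisions are invented for illustration.

```python
# A minimal fairness check: compare rates of favourable decisions per group.
# A large gap between groups is a signal to investigate, not proof of bias.
# The audit log below is hypothetical.

from collections import defaultdict

def positive_rates(decisions):
    """Rate of favourable outcomes per group, from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = positive_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical audit log: (group, 1 = favourable decision, 0 = unfavourable).
audit = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

print(f"parity gap: {parity_gap(audit):.2f}")
```

Checks like this are a starting point, not a conclusion: a gap may reflect bias in the model, skew in the data, or a legitimate difference, and human judgment (guideline 7) is needed to interpret it.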

These ethical guidelines serve as a foundation for the responsible use of AI at Columbus Global. By adhering to these principles, we can ensure that our use of AI benefits society, while minimizing potential harm.

Approved AI Tools

At Columbus Global, we believe in leveraging the best resources and tools to ensure efficient and responsible AI use. As part of this policy, we have identified a list of approved AI tools which have been vetted for quality, safety, and compliance with our ethical guidelines. 
These tools are for use by Columbus consultants during their day-to-day work and are separate from tools used for Analytics and AI projects with clients. No client data is shared with ChatGPT or other Generative AI tools.

  1. Jasper.AI: Well-known for its advanced natural language processing capabilities, Jasper.AI is an AI writing tool that uses machine learning to generate high-quality content. It proves especially useful for content creation, idea generation, and enhancing productivity while maintaining the highest standards of data privacy and security.
  2. OpenAI & ChatGPT: Developed by OpenAI, ChatGPT is a state-of-the-art conversational agent that uses machine learning to understand and respond to human language. It can serve various functions, from answering queries to providing useful suggestions, and is an excellent tool for developing robust interactive applications.
  3. Microsoft Copilot: Microsoft Copilot is an AI companion integrated into Microsoft 365 applications. It aids in collaborative work, enhancing productivity by assisting with writing, editing, summarizing, and organizing. It's designed to help you complete tasks more efficiently and effectively.
Columbus Global consultants are encouraged to leverage these approved tools while ensuring that their use adheres to the ethical guidelines and principles outlined in this policy. Columbus consultants must adhere to the rules set out in the Columbus Consultant AI Usage document, available on Columbus’s internal SharePoint.
Any use of AI tools not explicitly listed here requires prior approval, ensuring our commitment to responsible AI use remains uncompromised.

Bi-Annual Risk Assessment Guidelines

In order to maintain the ethical standards and principles outlined in this policy, Columbus Global will conduct a bi-annual risk assessment to evaluate potential risks associated with AI use, including topics such as bias, privacy implications, and misuse. The following guidelines are designed to provide a structured approach to the risk assessment:

  1. Evaluation of AI Systems: Every six months, all AI systems in use by Columbus Global will be evaluated to assess their functioning, performance, and alignment with our ethical guidelines. This includes AI systems used in client projects, internal processes, and those under development.
  2. Bias, Fairness & Transparency Assessment: The assessment will include an examination of the data used to train AI models and the results produced. Special attention will be given to identifying and rectifying potential biases that could adversely affect the fairness of the AI's decision-making process.
  3. Privacy Implications: The risk assessment will evaluate how each AI system handles personal data, ensuring that all data is anonymized and used in compliance with data privacy laws and regulations. The assessment will also include a review of the data protection measures in place.
  4. Misuse of AI: Potential misuse of AI systems will be evaluated by considering scenarios in which AI could be used in a way that conflicts with our ethical guidelines or could harm individuals or groups.
  5. Documentation of Findings: All findings from the risk assessment will be documented in a comprehensive report, including areas of potential risk, actions taken to mitigate the identified risks, and recommendations for future improvements.
  6. Review and Action: The report will be reviewed by the senior management team, and actions will be taken as needed based on the findings. This may include updating this policy, retraining AI models, or implementing additional safeguards.
  7. Communication: Findings from the risk assessment and subsequent actions taken will be communicated to all consultants to ensure transparency and collective responsibility.
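Step 3 above refers to data-protection measures such as anonymization. One common practical measure is replacing direct identifiers with salted one-way hashes before data is analysed. Strictly speaking this is pseudonymisation rather than full anonymisation, since re-identification remains possible for whoever holds the salt, which must be protected and eventually destroyed. The sketch below uses an invented salt and record purely for illustration.

```python
# A sketch of one data-protection measure: replacing direct identifiers with
# keyed one-way hashes (pseudonymisation). The salt and record are placeholders;
# in practice the salt must be stored securely, rotated, and never logged.

import hashlib
import hmac

SECRET_SALT = b"rotate-and-protect-this-value"  # placeholder for illustration

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, deterministic one-way hash."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "score": 0.87}
safe_record = {"user_id": pseudonymise(record["email"]), "score": record["score"]}
print(safe_record)  # the email never leaves the trusted boundary
```

Because the hash is deterministic, records belonging to the same individual can still be linked for analysis without exposing the underlying identifier.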

Conducting these risk assessments bi-annually allows us to continuously monitor and mitigate potential risks, uphold our commitment to responsible AI use, and ensure we remain accountable to our consultants, clients, and the wider community.

Training and Development for Responsible AI Use

At Columbus Global, we recognize that the responsible use of AI is contingent on the knowledge and understanding of those who develop and use this technology. Therefore, we are committed to providing ongoing training and awareness programs for all consultants involved in AI development and use.

  1. Induction Training: All new consultants will undergo a comprehensive training module on responsible AI use as part of their induction program. This will cover the fundamentals of AI, potential risks and ethical considerations, our policy on responsible AI use, and compliance requirements.
  2. Regular Workshops: We will conduct regular workshops and seminars to keep our consultants updated on the latest developments in AI ethics, law, and best practices. These sessions, led by experts from within and outside Columbus Global, will serve as a forum for learning, discussion, and knowledge exchange.
  3. Online Learning Resources: A repository of online learning resources will be made available to all consultants. This will include content on various aspects of AI, its ethical use, and the role of AI in digital transformation.
  4. Annual Refresher Course: All consultants will be required to complete an annual refresher course on responsible AI use. This will ensure that they remain cognizant of our AI ethical guidelines and are up-to-date with any changes in AI-related legal and regulatory compliance.
  5. AI Ethics Committee: We will establish an AI Ethics Committee that will be responsible for developing training content, curating learning resources, and overseeing the implementation of the training and development program. The committee will include representatives from different departments to ensure a comprehensive approach to AI training and development.

Through this comprehensive training and development program, we aim to foster a culture of responsible AI use within Columbus Global. By equipping our consultants with the necessary knowledge and skills, we are confident that we can navigate the challenges and opportunities presented by AI in an ethical and responsible manner.

Accountability and Oversight

At Columbus Global, we understand that the successful implementation of our Responsible AI Use Policy requires a robust system of accountability and oversight. To this end, we have established an AI Ethics Committee, responsible for ensuring adherence to the policy and monitoring the ethical use of AI within our organization. 
The AI Ethics Committee is comprised of representatives from different business lines and market units, providing a diverse and holistic perspective on AI use. The committee is tasked with reviewing AI applications, tools, and practices used within the organization and by our consultants at client sites. They will assess compliance with our policy, ensuring that our AI systems align with the principles discussed in the policy.
To monitor compliance, the AI Ethics Committee will utilize a combination of periodic reviews, audits, and assessments. These will include scrutinizing AI model development and data handling practices, auditing AI usage reports, and evaluating AI incident response records. 
In cases of non-compliance or breach of policy, the committee will take suitable action based on the severity of the breach. This could range from providing additional training and support to the affected consultants, to disciplinary action in severe cases. The committee will also be responsible for addressing any queries, concerns, or suggestions from our consultants with regard to AI use and ethics. 
Through this structure of accountability and oversight, Columbus Global aims to ensure that our AI practices are not only compliant with our policy but also uphold the highest ethical standards. We believe this is crucial in maintaining the trust and confidence of our clients, consultants, and the wider community.

External Communication

We understand the importance of transparent and effective communication with our external stakeholders - including customers, regulators, and the public - about our AI use. Our goal is to maintain high levels of transparency, ensuring that our stakeholders understand how we develop, use, and manage AI technologies.

  • Client Communication: We are committed to keeping our clients informed about the AI tools and solutions incorporated in their projects. We strive to provide clear, non-technical explanations about the AI's functionality, benefits, and potential risks. Transparency about data use, privacy, and security measures will be prioritized in client communication.
  • Regulatory Communication: Compliance with local and international regulations governing AI use is a top priority. We will work with relevant regulatory bodies to address any concerns and receive guidance on best practices.
  • Public Communication: Geared towards fostering an open dialogue with the public, our communication strategy will provide regular updates on our AI development and use, major breakthroughs, and policy changes. We aspire to demystify AI and its impacts, providing accessible and understandable information to the public. This communication will also highlight our ethical commitment to AI use.
  • Media and Press: Media releases will be used to announce significant achievements, partnerships, or changes in our AI landscape. We will ensure that all information released to the press is accurate, clear, and reflective of our commitment to responsible AI use.
  • Crisis Communication: In the event of any mishaps or controversial situations related to our AI use, we commit to a swift, transparent, and candid response. Our crisis communication strategy will involve acknowledging the situation, stating the actions taken, and reaffirming our commitment to ethical, responsible AI use. These actions will comply with all applicable regulations.

In all our external communications, we will uphold our values of transparency, honesty, and accountability. We believe that open and proactive communication about our AI use is crucial in maintaining the trust of our stakeholders and public confidence in our services.

Policy Updates

At Columbus Global, we recognize that the landscape of AI and its associated ethical, legal and social implications are constantly evolving. Therefore, we commit to ensuring our Responsible AI Use Policy remains current, relevant, and in alignment with emerging trends, technologies, and best practices in AI use.

Our AI Ethics Committee will conduct a comprehensive review of this policy on a bi-annual basis. However, if significant developments in AI technology or related legislation occur, the committee may initiate an extraordinary review to ensure our policy remains up-to-date.
This review process will consider changes in AI technologies, advancements in ethical AI practices, feedback from our consultants and clients, and learnings from our bi-annual risk assessment. The revisions will also take into account any significant alterations in the legal and regulatory environment concerning AI use.

Following the review, any updates or modifications to the policy will be clearly communicated to all consultants and stakeholders. All consultants will be required to familiarize themselves with the updated policy and adhere to the modified guidelines. If necessary, additional training will be provided to ensure all consultants have a clear understanding of the changes and their implications on their work.

Through these regular updates, Columbus Global is committed to maintaining a Responsible AI Use Policy that is dynamic, adaptable, and effectively addresses the ethical challenges of evolving AI technologies.
