How ethical AI balances innovation with integrity

It seems like only yesterday that artificial intelligence (AI) hit the scene. Now it seems like we’re constantly discussing why, how, and when we could integrate AI into every facet of our society. Like so many technological advancements before it, AI has the ability to fundamentally reshape how we live our lives personally and professionally. Depending on who you ask, this prospect is incredibly exciting, indescribably terrifying, or some mix of both. Regardless of where you may fall on that spectrum, one thing is certain: AI is here, and it’s here to stay. So rather than try to postpone the inevitable, many are championing a different approach to AI, one that says there can be and should be guardrails in place to ensure AI is a net positive for society. This approach is called ethical AI. But before you can have a productive conversation about how AI is used, it’s important to understand what AI is–and what it isn’t.


Understanding AI

AI is an umbrella term for a whole host of technologies and tools that operate in different ways and serve different purposes. Broadly speaking, AI is defined as any computer technology that seeks to perform tasks that previously required human intelligence. If that definition sounds vague to the point of being meaningless, that's because it is: there are simply too many different types of AI technology to usefully document in a single blog post. That said, most conversations around AI and ethical AI today center on tools that fall into one (or more) of the following categories:
Machine Learning (ML)
Machine Learning algorithms are used to parse through large amounts of data to identify underlying patterns. As the name implies, these algorithms continuously learn and become more effective at identifying and deriving insights from those patterns. Machine Learning is by far the most pervasive form of AI tool used today. Simpler forms of ML algorithms are used in analytics tools and recommendation engines on streaming services, social media platforms, and online advertising. Recommendation engines compare how individual users interact with different forms of content against historical behaviors of similar users in order to predict which content will be most appealing.
More complex forms of ML algorithms make intuitive leaps in data-intensive fields such as medicine and physics. In medicine, ML algorithms are already being used to identify new opportunities to improve differential diagnoses, and in physics scientists are using ML to identify new particles and detect complex quantum interactions. In hiring, ML can be used to analyze complex hiring data to determine which candidate skills predict overall job performance. ML also underpins other types of advanced AI, such as Generative AI and Natural Language Processing.
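To make the recommendation-engine idea above concrete, here is a minimal collaborative-filtering sketch: a user's ratings are compared against similar users, and unseen items are scored by similarity-weighted ratings. All data, names, and the scoring scheme are invented for illustration; production recommendation systems are far more sophisticated.

```python
# Toy collaborative filtering: predict which unseen item a user will
# like by weighting other users' ratings by how similar they are.
from math import sqrt


def cosine_similarity(a, b):
    """Similarity between two users over the items both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(a[i] ** 2 for i in shared))
    norm_b = sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)


def recommend(target, others, catalog):
    """Return the unseen catalog item with the highest predicted rating."""
    scores = {}
    for item in catalog:
        if item in target:
            continue  # the user has already rated this item
        weighted, weight_sum = 0.0, 0.0
        for other in others:
            if item in other:
                sim = cosine_similarity(target, other)
                weighted += sim * other[item]
                weight_sum += sim
        if weight_sum > 0:
            scores[item] = weighted / weight_sum
    return max(scores, key=scores.get) if scores else None
```

A real system would learn these similarity patterns from millions of interaction records, but the core intuition is the same: users who behaved alike in the past are predicted to behave alike in the future.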


Natural Language Processing (NLP)
Natural Language Processing is a branch of machine learning with the goal of improving an algorithm's ability to interpret, understand, and manipulate language. Natural language processors ingest text in order to build models around semantics, syntax, and sentiment. You likely interact with several NLP systems every day: predictive text on your smartphone keyboard, grammar suggestions in your favorite word processor, and translation apps all use NLP algorithms.
Prior to the introduction of consumer-grade Generative AI, most search engines used NLP algorithms to perform a task called latent semantic indexing, which converted natural-language searches into relevant keywords. This approach also aids the hiring process by identifying skills that traditional keyword-based ATS filters may miss. Today, one of the most popular forms of NLP is the Large Language Model, or LLM. LLMs use massive amounts of text to build more robust language models that can engage in “discussion” with human users.
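The predictive-text example mentioned above can be sketched with a simple statistical language model: count which word most often follows each word in a training corpus, then suggest the most frequent follower. The corpus and words below are invented for illustration; real NLP systems model far more context than a single preceding word.

```python
# Minimal bigram model: the statistical seed of predictive text.
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Map each word to a Counter of the words that follow it."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers


def suggest_next(followers, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

An LLM replaces these raw counts with learned representations over vast amounts of text, which is what lets it handle semantics and long-range context rather than just the previous word.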


Generative AI
These days, when people debate AI they are most often referring to Generative AI. Generative AI tools are designed to generate different forms of content from user-provided prompts. This content can be anything from the written word (ChatGPT, Google Gemini) to imagery (DALL-E, Adobe Firefly, Midjourney) to video (Sora, Runway). Generative AI tools are trained by using Machine Learning to analyze massive amounts of multimedia content.
Throughout the business world there’s currently a mad scramble to discover all the places where generative AI can be used to increase productivity. In the hiring process, for example, a generative LLM could be used to create custom candidate screening questions based on details from their resume. Generative AI is a hot topic in entertainment and publishing, mainly due to unclear data sources used for training and ongoing legal debates around copyright and ownership of AI-generated content. There’s also growing concern among professional artists that generative AI tools will be used to replace them rather than support them. These are concerns that should be addressed under an ethical AI framework.

What is ethical AI?

Ethical AI is an approach to the development and use of AI tools that prioritizes ethical guidelines, moral principles, and positive societal impact. This approach is achieved by implementing ethical and moral guardrails around the behavior and output of an AI tool, as well as by making the tool's decision-making process and outputs easy to audit. Any type of AI tool can be built and deployed under an ethical AI framework. One of the central appeals of ethical AI is that it encourages a human-centered approach to AI development, one that seeks to enrich people's lives and improve society.
Though there’s no single agreed-upon standard for ethical AI principles, many AI ethicists and advisory bodies like UNESCO agree on the following principles:
  • Fairness and non-discrimination
AI tools' behavior and output should be designed to reduce and, where possible, eliminate human biases around age, race, gender identity, economic class, religion, and nationality.
  • Privacy and data protection
AI tools should operate in strict accordance with data security and privacy regulations to safeguard the privacy and security of users. This includes providing secure storage for user data as well as allowing users to export or delete their data. This often includes safeguards to prevent user data from becoming training data for AI models.
  • Transparency and oversight
AI tools must be designed to operate with full transparency, allowing developers and users to audit a tool's decision-making process, verify that it adheres to ethical principles, and make corrections if it does not.

How CLARA employs ethical AI

CLARA’s AI uses machine learning and natural language processing to evaluate and score job candidates based not only on job-specific experience but also on important qualities like critical thinking, learning ability, and distance traveled (resourcefulness, grit, resilience, etc.). CLARA is designed to reduce explicit and implicit biases from the candidate screening process and help hiring teams build a dynamic, more diverse, and impactful workforce. CLARA’s toolset is guided by the ethical AI principles of non-discrimination, privacy, and transparency.
  • Fairness and non-discrimination
CLARA’s AI models have been trained on high-quality, validated, and diverse data sets. All AI models used by CLARA are continuously monitored to prevent discriminatory biases from emerging over time. By design, information that could allow CLARA’s systems to make any determination about a candidate’s age, gender, race, religion, or origin is removed prior to evaluation. For example, the system does not show candidates’ names, instead presenting them by their initials. CLARA also provides detailed explanations of its decision-making process and candidate scoring so that both hiring professionals and candidates can be sure that the screening process is fair and equitable.
  • Privacy and data protection
CLARA’s AI doesn’t use any proprietary employer, company, or candidate data to train its models. Proprietary data is stored separately from training data so that the two never mix. All data is fully encrypted end-to-end during transmission and while at rest. CLARA’s team follows strict data access policies that put limitations on what personnel can access what data, as well as when and how they access it. As an added precaution, all interactions with sensitive data are monitored and logged, and CLARA performs regular security audits of both internal processes and the complete data pipeline.
  • Transparency and oversight
CLARA’s AI algorithms are designed to be transparent. Every match score includes an explanation of how it was reached. This explanation lets users evaluate how specific skills, qualifications, and other factors were weighted, make more informed hiring decisions, and adjust the AI algorithms as needed. As an added layer of oversight, CLARA complies with, and intentionally goes beyond, transparency requirements related to the use of AI in employment-related decision making, to ensure it does not contribute to employment discrimination.
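One of the fairness safeguards described above, reducing candidate names to initials and stripping fields that could reveal protected attributes, can be sketched in a few lines. This is a hypothetical illustration of the general technique only: the field names are invented and do not reflect CLARA's actual implementation.

```python
# Hypothetical redaction step run before any scoring logic sees a
# candidate record. Field names are invented for illustration.
def to_initials(full_name):
    """'Ada Lovelace' -> 'A.L.' so scoring never sees the full name."""
    parts = [p for p in full_name.split() if p]
    return ".".join(p[0].upper() for p in parts) + "." if parts else ""


def anonymize_candidate(record):
    """Return a copy of the record with identifying fields redacted."""
    redacted = dict(record)
    redacted["name"] = to_initials(record.get("name", ""))
    # Drop fields that could proxy for protected attributes.
    for field in ("date_of_birth", "nationality", "photo_url"):
        redacted.pop(field, None)
    return redacted
```

The design point is that redaction happens upstream of evaluation, so even a biased downstream model has no access to the attributes it could discriminate on.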


Work in progress

As AI technology advances, so does the need for more robust ethical frameworks. By taking a human-first approach, and maintaining a commitment to progressing ethical AI, we ensure our values shape technology, helping us use it in ways that reflect and uphold these values. While AI will continue to evolve, ethical practices—focused on fairness, transparency, and data security—allow us to keep humans in the loop to harness its transformative potential for everyone’s benefit.

Got questions about how CLARA's AI works? Would you like to better understand how we help support your security and compliance initiatives? Get answers to our most common questions here.