Cercle IA – AI Usage Policy

At Cercle IA, we view artificial intelligence (AI) as a major opportunity to transform the way we support our clients. Far from being a threat, AI is a strategic lever that, when used effectively, allows us to move faster, go further, and act with greater precision.

The role of AI is not to replace human intelligence, but to augment it: fostering the emergence of new ideas, informing decision-making, and strengthening the impact of our actions.

In practice, across our consulting engagements and creative work, AI frees up time for inspiration, strategy, and personalised support.

This AI Usage Policy reflects our commitment to using AI in an ethical, effective, and transparent manner — in the service of our clients, our team, and our partners.

It sets out our objectives, guiding principles, procedures, and standards that shape our day-to-day practice, with full awareness of the risks and limitations of AI.

1. Our Objectives with AI

The use of artificial intelligence at Cercle IA serves three core purposes:

  • delivering greater value to our clients;
  • enabling the development of our team;
  • adapting our activities to new needs and best practices in the AI era.

In practice, these purposes translate into five operational objectives:

  1. Better understanding our clients and their markets
    • In-depth analysis of the competitive landscape
    • A nuanced understanding of audiences and their behaviours
    • Implementing automated strategic monitoring to continuously benchmark performance against the market
  2. Leveraging and adding value to shared knowledge
    • Structuring and synthesising information from documents, interviews, or raw data
    • Enriching that knowledge with relevant external sources
    • Building on project history to create a genuine organisational memory
  3. Analysing available data
    • Interpreting web, social media, and campaign performance
    • Detecting significant trends in client data (CRM, analytics)
    • Generating personalised reports and proactive alerts
    • Measuring the real-time impact of marketing actions
  4. Enabling trend anticipation
    • Detecting weak sector signals
    • Modelling probable market or audience developments
    • Simulating different strategic scenarios to shift from reactive to proactive
  5. Preserving our clients’ resources
    • Optimising the allocation of marketing budgets
    • Automating repetitive tasks to free up time for strategic thinking
    • Preventing reputational or performance risks
    • Accelerating decision-making through fast and reliable analysis

2. Our Guiding Principles in the Use of AI

Our approach to AI is built on seven key principles that guide all of our practices:

  1. Respect for privacy and the European regulatory framework
    • Compliance with the GDPR and the AI Act; anonymisation of sensitive data
    • Exclusive use of secure, paid, and properly configured tools
  2. Human accountability, with a human in the loop
    • Systematic validation by a human expert at each key stage
  3. Professional development and human fulfilment
    • Developing the team's creativity and skills, never jeopardising employment or monitoring team members
  4. Environmental awareness
    • Responsible and efficient use of AI, avoiding unnecessary and energy-intensive usage
  5. Real value creation
    • AI is used only when a concrete benefit is delivered: time savings, greater precision, anticipation, or enhanced understanding
    • AI usage is not limited to cost reduction — we focus on solutions and applications that bring greater value to our clients
  6. Transparency and source selection
    • Cercle IA answers any client questions about the methodology used in a deliverable, and organises AI training sessions open to clients and teams through the Cercle IA programme
    • All our deliverables are considered AI-augmented, unless otherwise stated or expressly prohibited at a client’s request
    • Rigorous selection and verification of external sources, favouring recognised and validated ones
    • Systematic documentation of sources used to ensure traceability
    • Strict compliance with intellectual property rights: licence verification, use only of open-licence or rights-acquired data
  7. Technological neutrality
    • Tool selection based on relevance, without exclusive dependency
    • Resilience in the event of outages or degradation of an AI service

3. Best Practices and Examples

What we do

  • ✅ Systematic human validation (human in the loop)
  • ✅ Use of secure, GDPR-compliant tools
  • ✅ Anonymisation of sensitive data using the tools provided
  • ✅ Documentation and traceability of AI usage
  • ✅ Use of AI only when it delivers real value
  • ✅ Ongoing training for the team and client support
  • ✅ Mindful usage, with attention to environmental impact

What we do not do

  • ❌ No use of sensitive data in public AI tools
  • ❌ No deliverables generated without human validation
  • ❌ No blind reliance on AI for decision-making
  • ❌ No use of AI to surveil or jeopardise employment
  • ❌ No unnecessary or gimmicky automation
  • ❌ No dependency on a single vendor
  • ❌ No use of AI to deceive or spread misinformation

4. Awareness of Risks and Limitations

We acknowledge that artificial intelligence, despite its considerable benefits, carries risks and limitations that are essential to understand and manage.

Six identified risks and precautionary measures

  1. Data confidentiality and security. The use of external AI tools may expose sensitive information. We manage this risk through systematic anonymisation of client data, exclusive use of secure and GDPR-compliant tools, and the implementation of strict access management protocols.
  2. Algorithmic bias and discrimination. AI systems can reproduce or amplify biases present in their training data. We remain vigilant by diversifying our information sources, systematically validating results through human expertise, and regularly questioning our analyses to detect potential biases.
  3. Excessive technological dependency. Blind trust in AI or its systematic use could undermine our operational autonomy. We deliberately maintain alternative manual competencies, diversify our tools, and ensure that every consultant can carry out their work without AI assistance if needed.
  4. Accuracy and reliability of results. AI can generate incorrect information or “hallucinate” data. We enforce systematic human validation, cross-reference sources, and train our team to identify the limitations and potential errors of the tools we use.
  5. Impact on employment and skills. Automation could devalue certain human skills. We deliberately direct AI towards augmenting capabilities rather than replacing them, invest in ongoing training, and refocus our team members on high-value engagements.
  6. Alignment and control issues. AI systems may develop unintended behaviours or pursue objectives that do not match human intentions. We maintain constant human oversight of all processes, clearly define the objectives of each AI application, and reserve the right to halt or modify any automated process that does not serve the client’s best interests.
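The anonymisation step described under risk 1 can be sketched as follows. This is a minimal, hypothetical illustration, not our production tooling: the regex patterns and placeholder tokens are illustrative, and real pipelines would cover many more identifier types.

```python
import re

# Illustrative pseudonymisation sketch: replace direct identifiers with
# placeholder tokens before text is sent to an external AI tool, and keep
# a local mapping so validated output can be re-identified afterwards.
# The patterns below are examples only, not an exhaustive identifier list.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def pseudonymise(text):
    """Return (sanitised_text, mapping) with identifiers replaced by tokens."""
    mapping = {}
    counters = {}

    def make_sub(kind):
        def _sub(match):
            counters[kind] = counters.get(kind, 0) + 1
            token = f"<{kind}_{counters[kind]}>"
            mapping[token] = match.group(0)  # remember original value locally
            return token
        return _sub

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(make_sub(kind), text)
    return text, mapping

def reidentify(text, mapping):
    """Restore the original values in text returned by the AI tool."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

The mapping never leaves the consultant's environment; only the tokenised text is shared with the external tool, and the AI's response is re-identified locally after human validation.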

Limitations

We acknowledge that AI does not replace human judgement, empathy, creativity, strategic vision, or deep sector expertise. In other words, it is a remarkable accelerator, but it never removes the need for critical thinking, contextual adaptation, or validation through professional experience.

We are also aware that the capabilities and limitations of AI are constantly evolving. What works today may stop working tomorrow — and vice versa. We therefore remain vigilant in our usage and do not rest on past knowledge.

These limitations are, to us, an asset, as they preserve the essence of consulting: the human relationship and contextual relevance.

5. Commitment from the Team, Partners, and Clients

  • Mandatory commitment: every team member and partner must comply with this policy;
  • Contractual value: in our relationships with clients and partners, this AI Usage Policy carries the same weight as our general terms and conditions. In the event of a conflict with a client’s or partner’s AI policy, an acceptable compromise must be agreed before collaboration begins.

6. About This Document

This AI Usage Policy is a living document that reflects our vision of AI as a performance tool.

Given the constant evolution of technologies and practices in the field of AI, we commit to:

  • Preserving and developing human expertise as the core of our value proposition
  • Continuously training and supporting our team in their understanding of AI
  • Continuously adapting our practices to the evolution of AI and its regulatory framework
  • Evolving this policy through experience and dialogue with our clients, team, and partners

All comments and suggestions on this policy are welcome.

Notice: This AI Usage Policy was developed using artificial intelligence for brainstorming, drafting, and formatting — a concrete illustration of our “human in the loop” approach. The content has been fully reviewed, validated, and owned by the Cercle IA team, ensuring it faithfully reflects our values, expertise, and operational practices.

This policy draws partial inspiration from the “Responsible AI Manifesto for Marketing and Business” by the Marketing AI Institute, adapted to the specific context and priorities of Cercle IA.
