✦ Introduction
Diraflow Ltd. is a high-performance AI training and evaluation partner delivering production-grade data solutions for today's most demanding machine learning systems. We specialise in scalable data annotation, human-in-the-loop training, and rigorous model evaluation — enabling organisations to build, refine, and deploy AI that performs reliably in real-world environments.
Built for speed and precision, our operations combine expert human talent with structured workflows, multi-layer quality assurance, and domain-specific expertise. We handle complex, high-volume datasets across vision, language, and multimodal AI, ensuring consistency, accuracy, and measurable performance gains at every stage.
This Policy Handbook sets out the standards, obligations, and expectations that govern how we operate — from how we handle training data to how we treat our people. These policies apply to all employees, contractors, researchers, and interns globally.
Core Principle
Every AI system we build must be safe, explainable, and fair. These are not constraints on our business — they are the foundation of it. We will not sacrifice ethics for speed, and we will not compromise safety for commercial pressure.
All staff are required to read this handbook in full, complete the annual policy acknowledgement, and raise any concerns with their line manager or the Ethics & Compliance team.
1.1 Purpose
This policy establishes the ethical principles that govern all AI research, training, and deployment activities at DiraflowAI. It ensures that every AI system we build respects human rights, avoids harm, and operates transparently.
1.2 Core Ethical Principles
Fairness & Non-Discrimination
- AI models must be tested for bias across protected characteristics including race, gender, age, religion, disability, and national origin before deployment (a minimal example check is sketched after this list).
- Training datasets must be audited for representation gaps. Datasets that skew heavily toward any demographic must be balanced or supplemented before use.
- Discriminatory outputs must be documented, reported to the Ethics team within 24 hours, and resolved before the model is released.
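The illustrative Python sketch below shows one minimal form such a bias check could take: comparing positive-prediction rates across groups for a single protected attribute and flagging large gaps. The data structure, group labels, and 10% threshold are assumptions for illustration, not a prescribed testing methodology.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate per demographic group.

    predictions: list of 0/1 model outputs.
    groups: list of group labels of the same length (illustrative).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(predictions, groups, max_gap=0.10):
    """Flag groups whose selection rate differs from the overall rate
    by more than max_gap (the threshold is an illustrative assumption)."""
    overall = sum(predictions) / len(predictions)
    rates = selection_rates(predictions, groups)
    return {g: r for g, r in rates.items() if abs(r - overall) > max_gap}

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]
    print(flag_disparities(preds, grps))
```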
Transparency & Explainability
- All production AI systems must include model cards documenting intended use, known limitations, training data sources, and evaluation metrics (a model card sketch follows this list).
- Clients must be informed when they are interacting with an AI system, not a human.
- Where technically feasible, model decisions must be explainable to affected users in plain language.
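A minimal sketch of the model card fields named above, expressed as a simple Python structure. The field names and example values are illustrative assumptions, not Diraflow's prescribed schema.

```python
import json

# Illustrative model card; field names and values are assumptions,
# not Diraflow's prescribed schema.
model_card = {
    "model_name": "example-classifier-v1",               # hypothetical model
    "intended_use": "Internal document triage only",
    "out_of_scope_use": ["medical or legal decisions"],
    "training_data_sources": ["licensed-corpus-2024"],   # placeholder
    "known_limitations": ["accuracy degrades on non-English text"],
    "evaluation_metrics": {"accuracy": 0.91, "f1": 0.88},  # example values
    "human_escalation_path": "support@example.invalid",    # placeholder
}

if __name__ == "__main__":
    print(json.dumps(model_card, indent=2))
```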
Human Oversight
- No AI system operating in high-stakes domains (healthcare, legal, financial, criminal justice) may make autonomous decisions without human review.
- Every deployed model must have a human escalation path and a documented shutdown procedure.
- All automated decisions affecting individuals must be logged and be subject to appeal.
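One possible shape for the decision log required above; a minimal sketch assuming an append-only JSON-lines file and hypothetical field names.

```python
import json
from datetime import datetime, timezone

def log_automated_decision(path, subject_id, model_version, decision, appeal_contact):
    """Append one automated-decision record to a JSON-lines log.

    Field names and the appeal_contact value are illustrative assumptions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model_version": model_version,
        "decision": decision,
        "appeal_contact": appeal_contact,  # documented appeal path
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_automated_decision(
        "decisions.jsonl", "subject-0001", "demo-model-0.1",
        "application_declined", "appeals@example.invalid",
    )
```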
Safety & Non-Maleficence
- AI systems must not be designed or trained to deceive, manipulate, surveil, or harm individuals or groups.
- Red-teaming and adversarial testing are mandatory for all public-facing models before release.
- Employees who identify unsafe model behaviour must report it immediately via the Safety Incident channel.
1.3 Prohibited AI Applications
DiraflowAI will not build, train, or supply AI systems for the following purposes:
- Mass surveillance or social scoring of populations
- Autonomous lethal weapons systems
- Deepfakes intended to deceive without consent
- Psychological manipulation targeting vulnerable groups
- Generation of child sexual abuse material
- Circumvention of democratic processes or electoral integrity
Violations
Any employee found to be developing, supporting, or concealing prohibited AI applications will face immediate disciplinary action up to and including termination and referral to law enforcement.
1.4 Ethics Review Process
All new AI projects must pass Ethics Review Board (ERB) review, conducted with reference to the Data Protection Act (2019) and the UNESCO Ethical Impact Assessment tool, throughout and after development. High-risk projects additionally require review by the independent AI Ethics Board.
| Risk Level | Criteria | Review Required |
| --- | --- | --- |
| Low | Internal tools, research prototypes | Team lead sign-off |
| Medium | Client-facing, limited scope | Ethics team review |
| High | Public deployment, sensitive domains | Ethics Board + Legal |
| Critical | Healthcare, legal, law enforcement AI | Full panel review + external audit |
2.1 Purpose
This policy governs how DiraflowAI acquires, stores, labels, uses, and disposes of data used in AI training. It ensures compliance with Ethics Review Board (ERB) requirements, the Data Protection Act (2019), the UNESCO Ethical Impact Assessment tool, and other applicable data protection laws, and protects the privacy and rights of data subjects.
2.2 Data Acquisition
- All training data must have a documented legal basis for use: licensed purchase, explicit consent, public domain, or synthetic generation.
- Web-scraped data must be reviewed for copyright compliance, personal data, and sensitive content before ingestion into the training pipeline.
- Data acquisition from third-party vendors requires a signed Data Processing Agreement (DPA) and vendor due diligence review.
- Employee personal data may never be used as training data without explicit, documented consent.
2.3 Data Classification
| Class | Definition | Controls Required |
| --- | --- | --- |
| Public | Openly available, no PII | Standard access controls |
| Internal | Proprietary, business-sensitive | Role-based access, encryption at rest |
| Confidential | Client data, licensed datasets | Strict need-to-know, audit logging |
| Restricted | PII, health data, biometric data | Encryption + anonymisation + DPO sign-off |
PII — Personally Identifiable Information | DPO — Data Protection Officer
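The table above can also be expressed as a machine-readable control check. The sketch below is one illustrative encoding; the control identifiers are assumptions, and controls are treated as cumulative across classes (consistent with the encryption-at-rest requirement in 2.4).

```python
# Required controls per data class, mirroring the table above.
# Control identifiers are illustrative assumptions; controls are
# treated as cumulative from Public through Restricted.
REQUIRED_CONTROLS = {
    "Public":       {"access_controls"},
    "Internal":     {"access_controls", "rbac", "encryption_at_rest"},
    "Confidential": {"access_controls", "rbac", "encryption_at_rest",
                     "need_to_know", "audit_logging"},
    "Restricted":   {"access_controls", "rbac", "encryption_at_rest",
                     "need_to_know", "audit_logging",
                     "anonymisation", "dpo_signoff"},
}

def missing_controls(data_class, applied_controls):
    """Return the controls still required for a dataset of the given class."""
    return REQUIRED_CONTROLS[data_class] - set(applied_controls)

if __name__ == "__main__":
    print(missing_controls("Restricted", ["encryption_at_rest", "audit_logging"]))
```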
2.4 Data Storage & Security
- All Confidential and Restricted data must be encrypted at rest.
- Training datasets must be stored in access-controlled data lakes. Access is granted on a project basis and reviewed quarterly.
- Personal data must be anonymised or pseudonymised before being used in model training wherever technically possible (a minimal pseudonymisation sketch follows this list).
- Data retention periods must be defined at the time of acquisition. No training data may be retained beyond its authorised period.
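As a minimal illustration of the pseudonymisation requirement above, the sketch below replaces direct identifiers with keyed hashes before records enter a training set. The field names and salt handling are assumptions, and keyed hashing alone does not amount to full anonymisation.

```python
import hashlib
import hmac

def pseudonymise(record, id_fields, salt):
    """Replace direct identifiers with keyed hashes.

    record: dict of field -> value; id_fields: fields to pseudonymise.
    salt: secret key assumed to be managed outside the training
    environment (e.g. a key vault).
    Note: quasi-identifiers may still allow re-identification, so this
    is pseudonymisation, not full anonymisation.
    """
    out = dict(record)
    for field in id_fields:
        if field in out and out[field] is not None:
            digest = hmac.new(salt, str(out[field]).encode("utf-8"),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # truncated pseudonym
    return out

if __name__ == "__main__":
    sample = {"email": "jane@example.invalid", "text": "support ticket body"}
    print(pseudonymise(sample, ["email"], salt=b"replace-with-managed-secret"))
```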
2.5 Data Subject Rights
Where personal data is used in training, DiraflowAI must be able to comply with the following rights upon valid request:
- Right of access — provide a copy of personal data held within 30 days.
- Right to erasure — remove personal data from datasets and retrain or fine-tune affected models as required.
- Right to rectification — correct inaccurate personal data.
- Right to object — cease processing for specific purposes.
Data Breach Response
Any suspected data breach must be reported to the Data Protection Officer within 2 hours of discovery. Confirmed breaches involving personal data must be notified to the relevant supervisory authority within 72 hours, unless applicable law directs otherwise.
3.1 Purpose
This policy ensures that all AI research conducted at Diraflow is rigorous, reproducible, honest, and appropriately attributed. It governs internal research, published papers, conference submissions, and public model releases.
3.2 Research Standards
- All experimental results must be reproducible. Code, datasets (where permissible), and hyperparameters must be documented and archived for reference.
- Researchers must report results honestly, including negative results and failure cases. Selective reporting of only positive outcomes is a violation of this policy.
- Benchmark evaluations must use held-out test sets. Training on evaluation data (data contamination) must be disclosed if it occurs.
- Statistical claims must include confidence intervals, significance tests, and effect sizes where applicable.
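For the statistical-reporting requirement above, one simple way to attach a confidence interval to a headline accuracy figure is a percentile bootstrap. The sketch below uses only the Python standard library; the resample count and confidence level are illustrative choices.

```python
import random

def bootstrap_accuracy_ci(correct, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for accuracy.

    correct: list of 0/1 indicators, one per evaluation example.
    Returns (point_estimate, lower, upper) for a (1 - alpha) interval.
    """
    rng = random.Random(seed)
    n = len(correct)
    point = sum(correct) / n
    stats = sorted(
        sum(rng.choices(correct, k=n)) / n for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return point, lo, hi

if __name__ == "__main__":
    outcomes = [1] * 88 + [0] * 12  # e.g. 88 correct out of 100
    print(bootstrap_accuracy_ci(outcomes))
```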
3.3 Authorship & Attribution
- Authorship on published work must reflect genuine intellectual contribution. Honorary or gift authorship is prohibited.
- All contributors must be listed in acknowledgements. Third-party datasets, models, and code must be cited appropriately.
- The use of AI-generated text in research papers must be disclosed in accordance with the target journal or conference policy.
3.4 Intellectual Property
- All research outputs produced using Diraflow resources, time, or data are the intellectual property of Diraflow unless otherwise agreed in writing.
- Researchers wishing to publish must obtain written approval from the relevant Research leadership at least 5 working days before submission.
- Open-source releases of models, code, or datasets require review by a board member and the Legal team, and a defined licence, before publication.
3.5 Research Safety Review
- Research into dual-use capabilities requires Safety Board review, conducted with reference to the Data Protection Act (2019) and the UNESCO Ethical Impact Assessment tool, before work may continue.
- Capability evaluations that reveal dangerous emergent behaviours must be reported to the Safety Board within 48 hours and may not be published without Board approval.
4.1 Purpose
Human annotators are central to Diraflow's training pipeline. This policy ensures that all annotators — employees, contractors, and third-party labellers — are treated fairly, paid equitably, and protected from psychological harm.
4.2 Fair Compensation
- All annotators must be paid at or above the prevailing living wage in their country of work. Diraflow does not participate in exploitative micro-task platforms where effective hourly pay falls below minimum wage.
- Piece-rate pay structures must be benchmarked quarterly to ensure effective hourly earnings remain fair.
- Payment must be made within one calendar month of task completion for contractors and on the standard payroll cycle for employees, unless otherwise agreed in writing.
4.3 Psychological Safety & Harmful Content
- Annotators must be informed before assignment if a task involves potentially distressing content (violence, abuse, self-harm, explicit material).
- No annotator may be required to review harmful content without prior consent, appropriate training, and ongoing wellbeing support.
- Exposure to graphic or disturbing content must be time-limited. Annotators may not be assigned to harmful-content tasks for more than 2 consecutive hours without a mandatory break.
- Free and confidential counselling must be available to all annotators working on sensitive content tasks.
- Annotators may decline distressing tasks without penalty or pay reduction.
4.4 Quality & Transparency
- Annotators must be given clear, unambiguous task guidelines and calibration examples before beginning work.
- Inter-annotator agreement must be tracked (an agreement-metric sketch follows this list). Systematic disagreements must trigger guideline review, not annotator penalisation.
- Annotators must never be deceived about the purpose of their labelling work. The general intended use of the model must be disclosed.
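A minimal sketch of one common agreement measure, Cohen's kappa for two annotators. The choice of metric and the label set are illustrative assumptions, not a prescribed quality-assurance procedure.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b)
    )
    if expected == 1.0:  # both annotators used a single identical label
        return 1.0
    return (observed - expected) / (1.0 - expected)

if __name__ == "__main__":
    a = ["cat", "dog", "dog", "cat", "bird", "dog"]
    b = ["cat", "dog", "cat", "cat", "bird", "dog"]
    print(round(cohens_kappa(a, b), 3))
```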
5.1 Access Control
- Access to model weights, training data, and production systems is granted on a least-privilege basis and reviewed every 90 days (a review-tooling sketch follows this list).
- All staff must use multi-factor authentication (MFA) for all company systems. Password-only authentication is not permitted.
- Privileged access (root, admin, model weight download) must be time-limited, logged, and approved by a line manager.
- Departing employees and contractors must have all access revoked on departure, and no later than 4 hours after the end of their last working day.
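A minimal sketch of how the 90-day access review could be supported in tooling, by flagging grants overdue for review. The record format and field names are assumptions.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # per the access-review requirement above

def overdue_grants(grants, today=None):
    """Return access grants whose last review is more than 90 days old.

    grants: iterable of dicts with 'user', 'resource', 'last_reviewed' (date).
    Field names are illustrative assumptions.
    """
    today = today or date.today()
    return [g for g in grants if today - g["last_reviewed"] > REVIEW_INTERVAL]

if __name__ == "__main__":
    sample = [
        {"user": "alice", "resource": "model-weights-repo",
         "last_reviewed": date(2024, 1, 15)},
        {"user": "bob", "resource": "training-data-lake",
         "last_reviewed": date.today()},
    ]
    for g in overdue_grants(sample):
        print(f"review overdue: {g['user']} -> {g['resource']}")
```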
5.2 Model Weight Security
- Trained model weights are classified as Confidential. Unauthorised copying, sharing, or export of weights is a serious disciplinary offence.
- Model weights must be stored in encrypted, access-controlled repositories. Local copies on personal devices are prohibited.
- Any suspected exfiltration of model weights must be reported to the CISO immediately.
5.3 Acceptable Use of AI Tools
- Employees may not input client data, proprietary source code, or confidential business information into public AI tools unless the tool has a signed enterprise data agreement.
- Use of AI coding assistants is permitted for general development tasks. All AI-generated code must be reviewed by a human engineer before merging.
- Employees must not use company AI systems to generate content that violates any other policy in this handbook.
5.4 Incident Response
Security incidents must be classified and responded to as follows:
| Severity | Example | Report Within | Escalate To |
| --- | --- | --- | --- |
| P1 — Critical | Personal data breach, major system compromise, financial fraud, critical service outage | 1 hour | CEO/MD, CISO, Legal Counsel, Data Protection Commissioner (Kenya) |
| P2 — High | Unauthorised access, ransomware, significant data exposure risk | 4 hours | Head of IT/Security, Senior Management, Compliance Officer |
| P3 — Medium | Policy violation, phishing attempt, suspicious activity with limited impact | 24 hours | IT Department, Information Security Team |
| P4 — Low | Suspicious email, minor anomaly, non-critical system issue | 72 hours | IT Support / Line Manager |
6.1 Equal Opportunity
Diraflow is an equal opportunity employer. We do not discriminate on the basis of race, colour, ethnicity, religion, gender, gender identity, sexual orientation, age, disability, pregnancy, marital status, or national origin in any employment decision.
- All job advertisements must use inclusive language and be reviewed by Operations before posting.
- Structured interviews with standardised scoring must be used for all roles. Unstructured interviews that rely solely on "culture fit" judgements are not permitted.
- Compensation bands for equivalent roles must be published internally. Pay equity audits must be conducted annually.
6.2 Anti-Harassment & Respectful Workplace
- Harassment, bullying, and intimidation of any kind are prohibited and will result in disciplinary action.
- Harassment includes but is not limited to: verbal abuse, unwanted physical contact, offensive jokes, exclusionary behaviour, and online harassment via company platforms.
- All reports of harassment will be investigated promptly, confidentially, and impartially. Retaliation against anyone who raises a concern is a separate disciplinary offence.
6.3 Conflicts of Interest
- Employees must disclose any financial interest, secondary employment, or personal relationship that could conflict with their duties at Diraflow.
- Employees may not work for direct competitors, clients, or partners without prior written approval from their manager and the People team.
6.4 Moonlighting & Intellectual Property Assignment
- Employees may engage in personal projects outside work hours provided they do not use Diraflow resources, compete with Diraflow, or derive from work performed at Diraflow.
- Any AI model, tool, or invention created using Diraflow resources, time, or data belongs to Diraflow and must be assigned accordingly unless agreed otherwise in writing.
6.5 AI-Specific Conduct Expectations
Given the nature of our work, all employees are additionally expected to:
- Refuse instructions to build or train AI systems that violate Policy 1.0.
- Raise safety concerns through official channels without fear of retaliation.
- Participate in mandatory AI ethics and safety training (minimum 8 hours per year).
- Not publicly comment on Diraflow's AI capabilities, risks, or model details without Communications approval.
7.1 Purpose
AI model training is computationally intensive and has a significant environmental footprint. Diraflow is committed to minimising our impact and operating transparently about our energy use and carbon emissions.
7.2 Compute Efficiency
- All large training runs must include a compute budget justification. Runs must not proceed if the projected improvement does not justify the compute cost.
- Training infrastructure must use the most energy-efficient hardware available. GPU utilisation rates must be monitored and optimised continuously.
- Idle compute resources must be shut down or scaled down automatically. Persistent idle compute is a wasteful use of energy and cost.
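As an illustration of the idle-compute requirement above, the sketch below flags nodes whose recent GPU utilisation stays below a threshold so they can be scaled down. The utilisation source, threshold, and window are assumptions rather than a specific scheduler integration.

```python
def idle_nodes(utilisation_history, threshold=0.05, window=12):
    """Return node names whose last `window` utilisation samples all fall
    below `threshold` (both values are illustrative assumptions).

    utilisation_history: dict of node name -> list of GPU utilisation
    samples in [0, 1], most recent last.
    """
    idle = []
    for node, samples in utilisation_history.items():
        recent = samples[-window:]
        if len(recent) == window and all(u < threshold for u in recent):
            idle.append(node)
    return idle

if __name__ == "__main__":
    history = {
        "gpu-node-01": [0.02] * 12,       # idle: candidate for scale-down
        "gpu-node-02": [0.85, 0.90] * 6,  # busy
    }
    print(idle_nodes(history))
```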
7.3 Renewable Energy
- Diraflow targets 100% renewable energy for all owned or directly contracted compute infrastructure by 2026.
- Cloud compute providers must be selected with preference given to those operating on renewable or carbon-neutral energy.
8.1 Purpose
Diraflow is committed to operating with integrity. This policy encourages and protects employees who raise concerns about unethical, illegal, or unsafe conduct — including concerns about the safety of AI systems we are building.
8.2 What to Report
- Violations of any policy in this handbook
- Unsafe or harmful AI development practices
- Fraud, bribery, or financial misconduct
- Data breaches or privacy violations
- Discrimination, harassment, or bullying
- Pressure to suppress or ignore safety concerns
8.3 How to Report
- Directly to your line manager or the Ethics & Compliance team.
- Anonymously via the independent Ethics Hotline.
- Via the confidential online reporting portal — contact diraflow.ai@gmail.com to request access.
8.4 Protections
Diraflow prohibits retaliation against any person who raises a concern in good faith. Retaliation includes demotion, dismissal, pay reduction, exclusion, and harassment. Any manager found to have retaliated against a whistleblower will face immediate disciplinary action.
Legal Protection
Whistleblowers may also be protected under applicable national laws where Diraflow operates, including Kenya's legal and regulatory framework: the Bribery Act 2016 (Kenya), the Leadership and Integrity Act 2012 (Kenya), and the Public Officer Ethics Act 2003 (Kenya), as well as any other relevant legislation that provides for the protection of persons who report misconduct or unethical practices.
Policy Acknowledgement
All employees, contractors, and interns are required to sign this acknowledgement upon joining Diraflow and annually thereafter. The signed form must be submitted to the Operations Department.