Security & Data Practices
We implement a comprehensive set of technical and organizational measures (TOMs) in line with GDPR Article 32 to ensure the confidentiality, integrity, and availability of customer data. Below we outline our operational security practices and controls:
Hosting & Architecture
Our production systems are hosted in a secure European data center. All servers are located in the EU (specifically in Germany) to keep data under EU jurisdiction. We use Contabo GmbH as our infrastructure provider, which operates multiple Tier-III+ data centers in Germany, ensuring robust physical security and environmental protections.
Each breeder’s herd data is logically separated so that no breeder can access another’s information, preventing cross-breeder data leaks. Registry data, by contrast, is intentionally available for all ALPACUNA customers to view, as we aim to provide global registration functionality through ALPACUNA herds. Access to the servers is restricted to authorized personnel only, and administrative access requires multi-factor authentication (see Access Controls below).
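To illustrate what this logical separation looks like at the application layer, the following is a minimal sketch (with invented table, column, and type names, not our actual schema or code) of a data-access helper that always scopes queries to the requesting breeder’s tenant:

```python
# Illustrative only: a hypothetical repository helper that always scopes
# queries to the requesting breeder's tenant ID, so cross-tenant reads
# cannot be expressed at the call site. Table/column names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestContext:
    user_id: str
    tenant_id: str  # the breeder's herd/tenant identifier

def fetch_animals(db, ctx: RequestContext, search: str = ""):
    """Return only animals belonging to the caller's own herd."""
    return db.execute(
        "SELECT id, name, born_on FROM animals "
        "WHERE tenant_id = %s AND name ILIKE %s",
        (ctx.tenant_id, f"%{search}%"),  # tenant filter is mandatory, not optional
    ).fetchall()
```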
Encryption (In Transit & At Rest)
Data in Transit: All sensitive data transmissions are encrypted using TLS 1.3, the current standard for transport security. We enforce HTTPS for all client-server communications and API calls. Using modern TLS ensures that data is well protected during transfer and cannot be read by unauthorized parties even if intercepted.
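As a simple illustration, the check below (a sketch, not our production tooling; the hostname is a placeholder) verifies that an endpoint negotiates TLS 1.3 and refuses anything older:

```python
# Minimal check that a public endpoint negotiates TLS 1.3; the hostname
# below is a placeholder, not our actual domain.
import socket, ssl

def assert_tls13(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # expected: "TLSv1.3"

print(assert_tls13("app.example.com"))
```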
Data at Rest: Full at-rest encryption is not yet enabled on application and database volumes. We mitigate the resulting risk through strict RBAC and least-privilege access, enforced MFA for infrastructure access, tenant isolation, hardened hosts, short retention periods for non-essential data, and comprehensive logging.
Backups & Disaster Recovery
We perform regular backups of our production databases and critical systems to prevent data loss. Backups are taken daily, and each backup is retained for at least 7 days before secure deletion. This yields a Recovery Point Objective (RPO) of roughly 24 hours (at most 1 day of data could be lost in the worst case) and a conservative Recovery Time Objective (RTO) of about 18 hours or less for major incidents (the time to restore service from backups). These targets ensure we can restore availability of data in a timely manner, as required by GDPR’s availability and resilience principle.
Backups are stored in encrypted form and safeguarded against unauthorized access. We have a documented backup strategy that includes off-site or geo-redundant storage (where possible) to protect against localized failures.
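As an illustration of the retention window described above, the sketch below (placeholder paths and file naming, not our actual backup tooling) prunes daily encrypted dumps older than seven days:

```python
# Sketch of a 7-day backup rotation, assuming daily encrypted dumps are
# written to a directory with ISO-date file names. Paths are placeholders.
import datetime as dt
from pathlib import Path

BACKUP_DIR = Path("/var/backups/db")   # hypothetical location
RETENTION_DAYS = 7                     # matches the stated retention window

def prune_old_backups(today=None):
    today = today or dt.date.today()
    removed = []
    for f in BACKUP_DIR.glob("*.dump.enc"):
        stamp = dt.date.fromisoformat(f.name[:10])  # files named "2024-05-01.dump.enc"
        if (today - stamp).days > RETENTION_DAYS:
            f.unlink()
            removed.append(f)
    return removed
```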
Availability & Monitoring
We strive to keep our service highly available and to minimize downtime. While we do not currently guarantee a specific uptime percentage via an SLA, our goal is continuous service availability. We have no fixed maintenance windows that require extended downtime; instead, updates and improvements are rolled out after thorough testing, typically with zero or minimal interruption to the service. Software updates are deployed as soon as new features are ready and tested rather than on a slow periodic schedule, which helps us respond to issues faster and deliver improvements continuously.
Our infrastructure is under 24/7 monitoring using automated tools that track server health, performance, and security indicators. This monitoring allows us to react quickly to unplanned outages or anomalies, helping us achieve high uptime. In practice, any incidents are treated with high priority and resolved as soon as possible. Although small, our team is committed to rapid incident response whenever an availability issue arises.
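The following is a minimal sketch of the kind of availability probe our monitoring performs; the endpoint URL and the alerting action are placeholders rather than our actual tooling:

```python
# Minimal availability probe; the health endpoint and the alert action
# are illustrative placeholders.
import urllib.request, urllib.error

def probe(url: str = "https://app.example.com/health", timeout: int = 10) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

if not probe():
    # in practice this would page the on-call developer rather than print
    print("ALERT: health check failed")
```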
Access Controls
We implement strict access controls to ensure that only authorized users can access data and functions necessary for their role. Our approach follows the principle of Role-Based Access Control (RBAC) combined with the least privilege principle. Each user account is assigned specific roles/permissions, and users are granted the minimum access needed to perform their duties – no more. This prevents privilege creep and limits potential damage if an account is compromised. In line with industry best practices, enforcing segregation of duties and least privilege is foundational to our security model.
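As a simplified illustration of role-based, least-privilege checks (the role and permission names are invented for the example and do not reflect our actual role model):

```python
# Illustrative role-to-permission mapping with a least-privilege check;
# role and permission names are made up for this example.
ROLE_PERMISSIONS = {
    "breeder":      {"animal:read", "animal:write"},
    "veterinarian": {"animal:read", "healthrecord:write"},
    "admin":        {"animal:read", "animal:write", "user:manage"},
}

def require(permission: str, roles: set[str]) -> None:
    granted = set().union(*(ROLE_PERMISSIONS.get(r, set()) for r in roles))
    if permission not in granted:
        raise PermissionError(f"missing permission: {permission}")

require("animal:read", {"veterinarian"})   # ok
# require("user:manage", {"breeder"})      # would raise PermissionError
```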
Administrative Access: Administrative accounts and back-end infrastructure access are protected with additional safeguards. We require multi-factor authentication (MFA) for any administrator or developer access to production systems. For example, access to our cloud infrastructure at Contabo is secured with a two-factor authentication step (Contabo’s platform enforces 2FA for account login). This ensures that a stolen password alone cannot grant access to our servers.
User Access and Sessions: Our application’s user authentication is handled via Keycloak (our identity provider), which supports optional two-factor authentication for user logins. We encourage all users – and especially admins of our application – to enable 2FA (e.g. Time-based One-Time Password apps) for added account protection. Additionally, we enforce automatic session timeouts and logout for inactivity. User sessions will expire after a period of inactivity (to mitigate risks of abandoned but still-authenticated sessions being hijacked). This practice of expiring idle sessions is an important control to prevent unauthorized use of an unattended session. Repeated failed login attempts are rate-limited and can trigger account lockouts or alerts, to protect against brute-force attacks. In summary, access to data is tightly controlled through scoped roles, strong authentication, session management, and oversight of admin privileges.
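To illustrate the brute-force protection mentioned above, here is a minimal sketch of a failed-login throttle; the thresholds are examples, and in practice this protection is provided by our identity layer rather than custom code:

```python
# Illustrative brute-force throttle: flag an account for lockout after N
# failed logins within a sliding window. Thresholds are example values.
import time
from collections import defaultdict, deque

MAX_FAILURES = 5
WINDOW_SECONDS = 15 * 60

_failures: dict[str, deque] = defaultdict(deque)

def record_failure(username: str, now: float | None = None) -> bool:
    """Record a failed login; return True if the account should be locked."""
    now = now or time.time()
    q = _failures[username]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop failures outside the window
    return len(q) >= MAX_FAILURES
```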
Passwords & Authentication
All user authentication is handled securely using Keycloak, an open-source identity management system. Passwords in our system are never stored in plaintext – they are hashed with a modern, secure hashing algorithm. By default, Keycloak uses Argon2 for password hashing, which is the state-of-the-art algorithm recommended by OWASP and the winner of the 2015 Password Hashing Competition. Argon2 hashing, combined with proper salting and stretching, means that even if our password database were somehow accessed, the actual passwords would be extremely difficult to crack. This provides a strong layer of protection for user credentials.
We enforce a strong password policy for user accounts. Passwords must be a minimum of 8 characters in length (and we support and encourage much longer passphrases for better security). This aligns with modern NIST guidelines, which require at least 8 characters and even recommend 15+ characters for administrative or highly privileged accounts. Our system also permits a wide range of characters (letters, numbers, symbols) so users can create complex or passphrase-style passwords without unnecessary restrictions. We do not impose arbitrary composition rules that encourage weak patterns; instead, the emphasis is on length and uniqueness (and users are advised not to reuse passwords). If a password is too weak, our system can reject it, in line with recommended screening practices.
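For illustration, the snippet below shows Argon2 hashing and verification using the argon2-cffi library, together with the 8-character minimum; in our system Keycloak performs the hashing itself, so this is a sketch of the mechanism rather than our code:

```python
# Sketch of Argon2 password hashing/verification with argon2-cffi,
# plus the minimum-length check described above.
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # Argon2id with library defaults (salted, memory-hard)

def set_password(plain: str) -> str:
    if len(plain) < 8:
        raise ValueError("password must be at least 8 characters")
    return ph.hash(plain)          # store only this hash, never the plaintext

def check_password(stored_hash: str, plain: str) -> bool:
    try:
        return ph.verify(stored_hash, plain)
    except VerifyMismatchError:
        return False
```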
Single Sign-On (SSO): Our platform currently does not integrate with external SSO/SAML/OIDC providers – users authenticate directly with our Keycloak-managed credentials. (Keycloak itself supports SAML/OIDC federation, so we may offer SSO options to enterprise clients in the future, but none are enabled at this time.) Each user has a dedicated account within our system.
In addition to password-based login, as noted above, we support two-factor authentication (e.g. TOTP apps) for accounts, which users can configure for themselves via Keycloak’s account settings. This adds an extra layer of security on top of the password. Overall, our authentication mechanisms follow industry best practices: robust hashing for stored secrets, strong password requirements, optional MFA, and no legacy or insecure authentication protocols.
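The mechanism behind TOTP-based 2FA can be sketched with the pyotp library as follows; Keycloak handles enrollment and verification for our users, and the account name and issuer string here are illustrative:

```python
# Sketch of TOTP enrollment and verification using pyotp; illustrative only.
import pyotp

secret = pyotp.random_base32()          # shown to the user once (e.g. as a QR code)
totp = pyotp.TOTP(secret)

provisioning_uri = totp.provisioning_uri(
    name="user@example.com", issuer_name="ALPACUNA"
)  # encoded into the QR code the authenticator app scans

# later, at login: accept the 6-digit code from the user's authenticator app
def second_factor_ok(code: str) -> bool:
    return totp.verify(code, valid_window=1)  # allow one 30-second step of drift
```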
Logging & Auditing
We maintain detailed audit logs of security-relevant events within the system. Key events that are logged include: user login attempts (successful and failed), password changes, administrative actions (e.g. changes to configurations or user permissions), data export or deletion requests, and other sensitive operations. By recording these events, we have an audit trail that can be analyzed in case of any suspicious activity or security incident. For example, admin actions and changes to security settings are logged with timestamps and the initiating account, providing accountability.
Log Retention: We retain audit and security logs for a significant period to support investigations and compliance needs. Logs are kept for a minimum of 90 days in active storage. Keeping historical logs is important because in the event of a cybersecurity incident, sufficient log history is key to understanding what happened. Industry best practice is to retain security logs for at least 3–6 months (or more) to facilitate incident response and forensics. All log data is stored securely (with access controls) to ensure confidentiality.
We also log access to production infrastructure and databases – for example, any administrator shell access on production systems is logged and auditable. These measures enable us to conduct audits and demonstrate compliance with security requirements, as well as to quickly pinpoint the sequence of events in case of an incident.
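For illustration, a structured audit event of the kind described above might look like the following sketch; field names are examples, and the real events are emitted by the application and Keycloak:

```python
# Sketch of a structured (JSON) audit event; field names are illustrative.
import json, logging, datetime as dt

audit = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO)

def audit_event(action: str, actor: str, target: str, success: bool, **extra) -> None:
    audit.info(json.dumps({
        "ts": dt.datetime.now(dt.timezone.utc).isoformat(),
        "action": action,        # e.g. "login", "password_change", "role_update"
        "actor": actor,
        "target": target,
        "success": success,
        **extra,
    }))

audit_event("role_update", actor="admin@example.com",
            target="user:1234", success=True, new_role="veterinarian")
```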
Vulnerability & Patch Management
Staying up-to-date with security patches is a critical part of our operations. We have a process for vulnerability management that involves continuously monitoring for relevant security advisories and promptly applying updates to our software and dependencies. While we do not follow a rigid periodic update schedule, we manually track software versions and security patches for all components in our stack. This means our team keeps an eye on release notes, vulnerability databases (CVE reports), and vendor announcements for any issues that affect our system. When a critical vulnerability is discovered in any component we use, we treat patching it with high urgency.
In practice, we aim to apply critical security patches within days of their release, often within 24–72 hours for severe issues. High-severity vulnerabilities are addressed as soon as possible (typically within a few business days), and we prioritize fixes or mitigations for any flaw that could be exploited. Medium- and low-risk updates are scheduled and bundled with regular development cycles, at most a few weeks apart. This approach aligns with industry guidelines that recommend patching critical issues within a very short timeframe and high/medium issues within defined intervals.
Before deploying updates, we test them in our staging environment to ensure they do not introduce regressions. Our CI/CD pipeline assists in this (see Secure SDLC below), running automated tests on new code. We do not currently use automated dependency scanning tools in CI (such as Dependabot) – instead, our developers manually review dependency updates and apply them deliberately, which allows for careful compatibility testing. However, we recognize the value of automation and are evaluating security scanners for third-party libraries to alert us to outdated components (using components with known vulnerabilities is an OWASP Top 10 risk).
Additionally, we maintain an internal inventory of all software components and their versions (a lightweight SBOM – Software Bill of Materials). This way, if a new CVE is announced, we can quickly identify whether we are using an affected version and take action. Our change management procedures ensure that all updates (security patches or otherwise) go through code review and testing before deployment. In summary, even though we do not have a fixed patch schedule (e.g. a monthly “Patch Tuesday”), we stay vigilant and responsive: security updates are applied without undue delay once available, which reduces exposure to known vulnerabilities.
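As a toy illustration of how such an inventory supports this, the sketch below checks an entirely made-up advisory against recorded component versions (the components, version numbers, and advisory are invented):

```python
# Toy SBOM check: compare recorded component versions against a hypothetical
# advisory. All names, versions, and the advisory itself are invented.
from packaging.version import Version

inventory = {"keycloak": "99.0.1", "postgresql": "99.2", "nginx": "99.1.0"}

# hypothetical advisory: "versions below fixed_in of this component are affected"
advisory = {"component": "nginx", "fixed_in": "99.2.0"}

affected = Version(inventory[advisory["component"]]) < Version(advisory["fixed_in"])
if affected:
    print(f"{advisory['component']} requires patching to >= {advisory['fixed_in']}")
```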
Secure Development (SDLC) & Testing
Security is incorporated into our Software Development Life Cycle (SDLC) from code design to deployment. Our development team is small (two developers), but we follow strict processes to maintain code quality and security:
- Code Review (4-Eyes Principle): All changes to the codebase require at least one peer review before being merged. We practice the “two-person rule” for code: one developer writes the code, and another developer must review and approve it. This code review process helps catch bugs and potential security issues early. It enforces that at least two sets of eyes have looked at every significant change, reducing the likelihood of a security vulnerability slipping through.
- CI/CD Pipeline and Automated Testing: We have a continuous integration pipeline that runs automated tests (unit tests, integration tests) on every code commit and especially before any code is merged to the main branch. Only code that passes all tests is allowed to be merged and deployed. Our continuous delivery pipeline then automatically builds and deploys the application after tests pass. Deployments are typically automated, meaning there is less room for human error or oversight during releases. We also include checks in the pipeline (CI/CD gates) so that if any security tests or critical checks fail, the deployment is halted. This ensures that insecure code does not get promoted to production.
- Secrets Management: We handle sensitive configuration secrets (such as API keys, credentials for databases or third-party services) with care. Secrets are never stored in source code repositories. Instead, we use secure configuration management – for example, environment variables – to inject secrets at runtime (a minimal sketch of this pattern follows this list). Access to these secrets is limited to the services or administrators that need them. We also routinely rotate secrets (passwords, API tokens) and have monitoring to ensure no secrets are accidentally exposed in logs or error messages. This practice aligns with good secret management hygiene (e.g., using secure vaults and not hard-coding credentials).
- Static Analysis & Dependency Scanning: (Planned) As a future improvement, we are looking into integrating static code analysis tools and dependency vulnerability scanners into our CI pipeline. This would provide an automated layer of security review to catch common coding flaws (like OWASP Top 10 issues) and alert on known vulnerable libraries. Given our small team, we currently perform manual reviews, but we recognize automated tools can bolster our security assurance.
- Penetration Testing & Audits: As part of our commitment to security, we plan to conduct periodic third-party penetration tests of our application and infrastructure. While not explicitly mandated by GDPR, regular external pen testing is considered a best practice to demonstrate proactive security compliance. At a minimum, we intend to have an independent security audit or pentest annually (the commonly recommended frequency for small businesses) to identify any vulnerabilities we might have missed. Penetration testing by specialized experts can uncover issues in authentication, access controls, injection flaws, etc., and we will use the results to remediate weaknesses and improve. In addition to external tests, we also perform our own internal security reviews and occasional secure code training to keep up with evolving threats.
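As referenced under Secrets Management above, the following minimal sketch shows the environment-based injection pattern; the variable name is an example, not our actual configuration:

```python
# Minimal pattern for injecting a secret at runtime via the environment
# instead of hard-coding it; the variable name is an example.
import os

def database_url() -> str:
    url = os.environ.get("DATABASE_URL")
    if not url:
        raise RuntimeError("DATABASE_URL is not set; refusing to start")
    return url  # never logged, never committed to the repository
```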
Overall, our SDLC incorporates security at every step: from careful design, code review, testing, up to deployment and maintenance. Changes to production are managed and tracked, and any security-impacting changes go through appropriate approval. By enforcing these practices, we reduce the likelihood of introducing vulnerabilities and ensure a rapid response (via CI/CD) to fix any that are found.
Incident Response & Breach Notification
Even with strong preventive measures, we recognize that security incidents can happen. We have an Incident Response Plan in place that defines how we respond to and manage security incidents or data breaches. Key aspects of our incident response program include:
- Reporting and Detection: We encourage our employees, users, and partners to report any suspicious activity or potential security issues to our security team immediately (we provide a contact email for reporting incidents). Our monitoring and alerting systems (described above) also serve as automated incident detectors for events like service outages, repeated login failures, or anomalous system behavior. Once an alert is received or a report comes in, we initiate the incident response procedure without delay.
- Roles and Responsibilities: The incident response plan defines clear roles and responsibilities for handling an incident. For instance, our lead developer might act as the incident manager coordinating the technical response, while our data protection officer (or equivalent) would handle communications and regulatory notifications. We have an internal communication strategy to ensure all relevant team members are informed and working on their tasks (e.g., containment, eradication, recovery).
- Investigation and Containment: Upon detecting an incident, we first work to contain the issue (e.g., isolating affected systems, revoking compromised credentials) to prevent further damage. We then investigate to determine the nature and scope of the incident – which systems or data are impacted, and how. Throughout this process, we keep detailed records of the security incident, including what happened, timelines, and actions taken. These records are important for learning lessons and for compliance (GDPR requires documentation of breaches).
- Eradication and Recovery: After understanding the incident, we eradicate the threat (e.g., remove malware, patch vulnerabilities, close any security gaps that led to the incident). Then we work to restore systems to normal operation (which may involve reverting to last known good state or using backups if data was corrupted). We closely monitor systems for any sign of persistence of the threat.
- Notification & Legal Compliance: In the event of a personal data breach, we follow GDPR’s breach notification requirements. If a breach is likely to pose a risk to individuals’ rights and freedoms (e.g. personal data was accessed by an unauthorized party), we will notify the relevant supervisory authority (Data Protection Authority) without undue delay and within 72 hours of becoming aware of the breach. We have a process to gather the necessary information quickly (nature of the breach, data affected, number of subjects, mitigation steps, etc.) to include in this notification as per Article 33. If the breach is likely to result in a high risk to the affected individuals, we will also inform those customers/data subjects without undue delay (per Article 34 obligations). Our customer notification process includes informing clients of what happened, what data is involved, and what measures we are taking. We aim for transparency and timely communication in such cases.
- Post-Incident Review: After an incident is resolved, we perform a post-mortem analysis to identify what went wrong and how to prevent similar incidents in the future. We update our security measures and response plan based on lessons learned. For example, if the incident revealed a gap in our defenses or monitoring, we will address it (such as deploying a new security control or improving staff training).
Our incident response plan is periodically reviewed and updated as our system evolves or as new threats emerge. We also ensure that all team members are aware of the plan and trained on their roles – even as a small team, preparation is key. By having a defined plan and adhering to legal breach notification duties, we can react swiftly and effectively to security incidents, minimizing damage and fulfilling our obligations to users and authorities.
Data Lifecycle & Retention
We manage the lifecycle of personal data carefully, from the point it is entered into our system to its eventual deletion. Our policies cover data import, export, retention, and deletion in compliance with data protection principles:
- Data Import: When customers import data into our system (e.g., by uploading a dataset or via API), the data is processed securely and becomes subject to all the security controls described in this document (encryption, access control, etc.). We ensure any data ingestion channels (file uploads, import scripts) are secure and that data is validated and handled safely to prevent injection of malicious content.
- Data Export & Portability: We support data portability for our customers. Upon request (or via built-in features), users can export their data in a commonly used format. Exports are generated in a secure manner – we only allow authorized users to export the data belonging to their account, and such exports are typically delivered over secure channels (e.g., downloadable via HTTPS or sent via encrypted email). This allows clients to retrieve their data at any time, aligning with GDPR’s data portability rights. We also make available relevant metadata or audit logs to customers when required, to ensure transparency.
- Data Retention: We retain personal data only for as long as it is necessary for the purposes for which it was collected or as required by our contractual or legal obligations. Active customer data remains in our production databases for the duration of the customer’s use of our service. If a customer account becomes inactive or is terminated, we follow a retention schedule: typically, we will retain the data for a short grace period (for example, 30 days) in case the customer reactivates or for backup recovery purposes, and then permanently delete or anonymize the data. Certain data that may be required for legal compliance (e.g., billing records) might be kept for longer as mandated by law, but will be securely archived.
- Right to Erasure (Deletion): We have a defined deletion process to honor users’ “right to be forgotten”. When a customer (or end-user) requests deletion of their personal data, or when an account is deleted, we promptly erase the personal data from our production systems (unless retention is legally required). The data is removed from our databases and file stores. Additionally, we ensure that the data is also removed from our backups in the next backup rotation cycle. Because we retain backups for 7 days, any deleted data may persist in backup snapshots for up to 7 days, after which those backups expire and are deleted as well. We inform users of this backup retention if they request erasure, and we do not restore deleted personal data from backups unless absolutely necessary (e.g., for disaster recovery). All deletion processes are logged for audit purposes.
- Test & Development Data: In our development and testing environments, we do not use real personal data. It is now an industry standard (and our policy) that test data must be anonymized or synthetic. We either generate dummy data for testing or use anonymization techniques if we need a dataset structurally similar to production. This ensures that personal information is not inadvertently exposed or misused in non-production contexts. By avoiding the use of live data in testing, we reduce the risk of breaches during development and comply with GDPR’s data minimization and security requirements. On the rare occasion where production data might be needed to debug an issue, we would obtain appropriate approvals and apply transformations to pseudonymize or minimize the data. But as a rule, developers work with non-real data in their local and staging environments.
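To illustrate the pseudonymization mentioned above, the following sketch replaces identifiers with keyed hashes so records remain linkable but not attributable; the field names and key handling are illustrative only, not our actual procedure:

```python
# Sketch of keyed pseudonymization for non-production use; field names and
# key management are illustrative placeholders.
import hashlib, hmac, os

PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: same input -> same token, but not reversible."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"owner_email": "breeder@example.com", "animal_name": "Luna"}
safe = {**record, "owner_email": pseudonymize(record["owner_email"])}
```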
In summary, we manage data through its entire life cycle with privacy in mind – only keeping it as long as necessary, allowing users to obtain or remove it on request, and protecting it at every stage (including when it’s used for testing or moved between systems). We also have procedures for data portability and deletion to comply with user rights under GDPR.
Sub-processors
To provide our service, we rely on a limited number of sub-processors (external service providers) who may process personal data on our behalf. Each sub-processor is carefully vetted for security and privacy practices, bound by a Data Processing Agreement (DPA), and must meet our standards for data protection (as required by GDPR Art. 28). We currently use the following sub-processors:
- Contabo GmbH – Hosting & Infrastructure Provider. Contabo provides the physical server infrastructure (cloud VMs and storage) in Germany where our application and database run. Location: Frankfurt & Nuremberg, Germany (European Union). Data processed: All customer data and application data is stored on Contabo’s servers (databases, uploaded files, backups). Contabo acts only on our instructions via the hosting platform and does not access the data except for essential infrastructure maintenance. A DPA is in place with Contabo covering GDPR-required safeguards.
- Mollie B.V. – Payment Processing. Mollie processes payment-related data to complete subscriptions and transactions. Location: Amsterdam, Netherlands (EU/EEA). Data processed: payer/customer identification and contact data (e.g., name, email, billing address), transaction metadata (order ID, amount, currency, timestamps), payment method details and tokens/identifiers (e.g., last four digits or masked IBAN; we do not store full card data), fraud-prevention signals, and compliance information required under financial regulations. Mollie processes this data to execute payments on our behalf and, where required (e.g., AML/fraud prevention), as an independent controller under its own legal obligations. A DPA (per Mollie’s standard data processing terms) governs processing performed as our processor.
We currently do not use any other external sub-processors that handle personal data. Services such as user analytics or email delivery are either not used or are handled in-house. If we integrate any new sub-processor in the future (for example, an email sending service or a support ticketing system that stores personal data), we will update our documentation and ensure they are held to the same security standards.
We maintain an up-to-date list of all sub-processors, including their purposes and locations, available on our website. We commit to informing our clients of any changes to this sub-processor list, adding or changing processors only with proper notice and, if required, consent. This transparency ensures our customers know who has access to their data.
All sub-processors are located in the EU/EEA or in jurisdictions with adequate protection. We do not engage any processor in a high-risk third country without appropriate safeguards.
Data Transfers (International)
No Third-Country Transfers: All personal data is processed and stored within the European Union. We do not transfer customer data to any country outside the EU/EEA in the course of providing our service. Our primary hosting is in Germany and any data processing by sub-processors also occurs in the EU. Therefore, no data exports to jurisdictions like the United States or other non-EU countries are performed, and no Standard Contractual Clauses (SCCs) are required for our data flows.
In the event that a specific data transfer outside the EU/EEA is ever required (for example, if at a client’s request data needs to be shared with an overseas third party, or if we later engage a sub-processor in a third country), we will ensure full compliance with GDPR Chapter V. This means we would only transfer data if an adequate protection mechanism is in place – such as an EU Commission adequacy decision for the destination country, or appropriate safeguards like SCCs (standard contractual clauses) together with additional measures as needed. We currently avoid such transfers altogether, which simplifies compliance and reduces risk.
By keeping data within the EU’s jurisdiction, we give our customers assurance that their data is protected under EU privacy laws, and we avoid the complexities of international data transfers. Users’ personal data remains on EU soil from collection to deletion, under EU-standard protections.
Compliance Note: All the measures described above collectively ensure a level of security appropriate to the risk, as required by GDPR Article 32. We regularly review and update these technical and organizational measures to adapt to evolving threats and business needs. Our commitment is to protect customer data through robust security practices and to be transparent about how we do so. If you have any questions about our security or data protection practices, please contact our data protection or security team for more information.