Security Brief
Company
Civilized Discourse Construction Kit, Inc
8 The Green
Suite #8383
Dover, DE 19901
Organizational Security
- Security policies are published at discourse/SECURITY.md on GitHub
- The company has over 100 employees across more than a dozen time zones. All work remotely from home; there is no “central office”
- We have had multiple third-party security audits in the past, available on request. Our most recent audit can be found at discourse.org/forms/pen-test
Asset Classification
- Our colocated servers, which run all hosting services, are:
  - in the USA: Hurricane Electric in their Fremont, California data center, and Equinix in their Seattle, Washington data center
  - in the EU: Equinix in their Dublin, Ireland data center
  - in Canada: Equinix in their Toronto, Ontario data center
- Our policy is to run the latest version of each operating system (macOS, Debian, Ubuntu, or Windows) with up-to-date security patches, applied automatically through each OS's built-in update system (a minimal patch-status audit sketch appears at the end of this section)
- All laptop systems that leave the home offices of our employees use full-disk encryption: BitLocker on Windows, FileVault on macOS, or dm-crypt with LUKS on Linux
- All mobile devices are either iOS or Android and use the full-device encryption that is now standard on both platforms
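The automatic patching policy above can be spot-checked on a Debian or Ubuntu host. The following is a minimal sketch, not part of our actual tooling; it only assumes the standard apt-config utility is present and reports whether unattended security upgrades are enabled.

    #!/usr/bin/env python3
    """Minimal sketch: report whether automatic updates are enabled on a
    Debian/Ubuntu host. Illustrative only, not Discourse's actual tooling."""
    import subprocess

    def apt_periodic_settings() -> dict:
        """Parse `apt-config dump` output into a dict of APT::Periodic::* values."""
        out = subprocess.run(["apt-config", "dump"], capture_output=True,
                             text=True, check=True).stdout
        settings = {}
        for line in out.splitlines():
            # lines look like: APT::Periodic::Unattended-Upgrade "1";
            if line.startswith("APT::Periodic::"):
                key, _, rest = line.partition(" ")
                settings[key] = rest.strip().rstrip(";").strip('"')
        return settings

    if __name__ == "__main__":
        s = apt_periodic_settings()
        lists = s.get("APT::Periodic::Update-Package-Lists", "0")
        upgrade = s.get("APT::Periodic::Unattended-Upgrade", "0")
        ok = lists != "0" and upgrade != "0"
        print(f"Update-Package-Lists = {lists}, Unattended-Upgrade = {upgrade}")
        print("Automatic security patching:", "enabled" if ok else "NOT enabled")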
Personnel Security
- We have formal employment policies
- We have a formal security training program
Physical Security
- For our colocated servers, you can read about Hurricane Electric's and Equinix's physical security on their websites
- All data is encrypted at rest
- We do not establish physical security policies for employees' homes, as all of our employees work from home
Environmental Security
- The main environmental risk is that we operate a single data center in each geographic region outside the United States
- Our team is spread across many different time zones, so the risk of a local disaster affecting a significant portion of our employees is low
Network Security
- All access to our colocated servers is behind two firewalls:
  - the Hurricane Electric routers (in all locations) block all non-operational traffic
  - our Linux stateful firewalls
- SSH private keys are the responsibility of employees. Public keys are recorded in Git, so we can tell if they change (a drift-check sketch appears at the end of this section)
- We can invalidate any employee key on all servers in at most 20 minutes via Puppet
- All changes to operational control data are peer reviewed
- All customer database backups are encrypted at rest
- We do not store customer data on our local workstations unless required
  - situations where this is required must be approved by management, and the data is removed from local workstations as soon as possible
- In addition to Rails and Nginx logs, we store all syslog data that any of our servers generate for at least one month where possible
- We run monthly automated security scans through Detectify; reports are available on request
- We have a public vulnerability disclosure bounty program at HackerOne; you can browse all security-related check-ins in Discourse using this public search of our GitHub open source code: https://github.com/discourse/discourse/search?q=SECURITY&type=Commits
- Our HTTPS hosting gets an A+ SSL rating from ssllabs.com; see the most recent result to verify (a sketch of an automated grade check appears at the end of this section)
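Because public keys are tracked in Git, drift on a host is easy to detect. The sketch below compares the authorized_keys file deployed on a server against the Git-tracked copy; the file paths and repository layout are hypothetical, and this is not our actual tooling.

    #!/usr/bin/env python3
    """Minimal sketch: detect drift between the SSH public keys deployed on a
    host and the copy tracked in Git. Paths are hypothetical placeholders."""
    from pathlib import Path

    DEPLOYED = Path("/home/deploy/.ssh/authorized_keys")         # keys live on the server
    TRACKED = Path("/opt/ops-repo/keys/deploy_authorized_keys")  # checkout of the Git-tracked copy

    def key_set(path: Path) -> set[str]:
        """Return the set of non-comment, non-blank key lines in a file."""
        lines = path.read_text().splitlines()
        return {l.strip() for l in lines if l.strip() and not l.startswith("#")}

    if __name__ == "__main__":
        deployed, tracked = key_set(DEPLOYED), key_set(TRACKED)
        unexpected = deployed - tracked   # keys on the host that Git does not know about
        missing = tracked - deployed      # keys Git expects that are absent from the host
        for key in sorted(unexpected):
            print("UNEXPECTED:", key[:60])
        for key in sorted(missing):
            print("MISSING:   ", key[:60])
        if not unexpected and not missing:
            print("authorized_keys matches the Git-tracked copy")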
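The A+ rating can also be re-checked programmatically. The sketch below polls the public SSL Labs assessment API (api.ssllabs.com/api/v3); the hostname is a placeholder and the polling and error handling are simplified.

    #!/usr/bin/env python3
    """Minimal sketch: poll the public SSL Labs API for a host's TLS grade.
    Hostname is a placeholder; polling and error handling are simplified."""
    import json
    import time
    import urllib.parse
    import urllib.request

    API = "https://api.ssllabs.com/api/v3/analyze"
    HOST = "meta.discourse.org"  # placeholder: any of our hosted HTTPS endpoints

    def analyze(host: str) -> dict:
        params = urllib.parse.urlencode({"host": host, "fromCache": "on", "maxAge": 24})
        with urllib.request.urlopen(f"{API}?{params}") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        report = analyze(HOST)
        while report.get("status") not in ("READY", "ERROR"):
            time.sleep(30)  # fresh assessments take a few minutes
            report = analyze(HOST)
        for endpoint in report.get("endpoints", []):
            print(endpoint.get("ipAddress"), "grade:", endpoint.get("grade"))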
Incident Response
- Our SLA is 99-99.9% uptime (depending on hosting tier)
- We have a private email address for urgent Enterprise customer support
- We monitor our own systems internally for outages, and also use external monitoring services for public HTTP/HTTPS (a minimal external probe sketch appears at the end of this section)
- Unacknowledged alerts are escalated to on-call team members in a follow-the-sun manner
- All our monitoring tools report to our online chat system; as a remote team spread across time zones, we have excellent throughout-the-day coverage there, since chat is the primary way we communicate with each other
- We have formal incident response and resolution policies
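To illustrate the external HTTP/HTTPS monitoring mentioned above, here is a minimal sketch of an external probe that reports failures into a chat webhook. The site list and webhook URL are placeholders; real monitoring uses dedicated services and internal tooling.

    #!/usr/bin/env python3
    """Minimal sketch of an external HTTP/HTTPS uptime probe that reports
    failures to a chat webhook. URLs are placeholders."""
    import json
    import urllib.request

    SITES = ["https://meta.discourse.org", "https://www.discourse.org"]  # placeholders
    CHAT_WEBHOOK = "https://chat.example.com/hooks/monitoring"           # hypothetical webhook

    def check(url: str, timeout: float = 10.0) -> bool:
        """Return True if the site responds with a successful status."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except Exception:
            return False

    def alert(message: str) -> None:
        """Post an alert to the chat system (payload shape is assumed)."""
        body = json.dumps({"text": message}).encode()
        req = urllib.request.Request(CHAT_WEBHOOK, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)

    if __name__ == "__main__":
        for site in SITES:
            if not check(site):
                alert(f"ALERT: {site} failed external HTTP check")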
Access Control
- All access to our colocated servers is through SSH jump boxes (a sample client configuration sketch appears at the end of this section)
- Password authentication is disabled on all servers. Per-user SSH keypairs are required to access any server, and an additional per-user password is required to gain root.
- Elevated privileges are granted only to team members with a legitimate need for them for the purposes of support or system maintenance.
- Physical access to Hurricane Electric or Equinix colocation facilities requires government-issued ID and being on the “approved” access list for our account
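As an illustration of the jump box model, an employee's OpenSSH client configuration might look like the sketch below. The host names and user are hypothetical; this is not our actual configuration.

    # ~/.ssh/config (illustrative; host names are hypothetical)
    Host jump
        HostName jump.example-hosting.net     # the SSH jump box
        User alice
        IdentityFile ~/.ssh/id_ed25519        # per-user keypair; password auth is disabled

    Host web-* db-*
        ProxyJump jump                        # all server access is routed through the jump box
        User alice
        IdentityFile ~/.ssh/id_ed25519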
System & Information Integrity
- Our hosting is redundant across multiple colocated servers for database, web, and routing
- The loss of any individual server will not cause an outage
- All our colocated physical servers have mirrored drive arrays for redundancy
- We ship encrypted customer data to Amazon S3 twice daily for secure offsite backup (a minimal upload sketch appears at the end of this section)
- Our S3 encrypted backup keys are stored in private Git repositories on our physical servers
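The general shape of the offsite backup step is sketched below: encrypt a database dump with GPG, then upload it to S3 with server-side encryption. The bucket name, GPG recipient, and paths are placeholders, and this is not our actual backup pipeline.

    #!/usr/bin/env python3
    """Minimal sketch of the offsite backup step: GPG-encrypt a dump, then
    upload it to S3. Bucket, key ID, and paths are placeholders."""
    import subprocess
    from datetime import datetime, timezone

    import boto3  # assumes the boto3 package is installed

    BUCKET = "example-discourse-backups"     # placeholder bucket name
    GPG_KEY_ID = "ops-backups@example.com"   # placeholder GPG recipient
    DUMP_PATH = "/var/backups/site-db.dump"  # placeholder database dump

    def encrypt(path: str) -> str:
        """Encrypt the dump for the backup recipient; returns the .gpg path."""
        encrypted = path + ".gpg"
        subprocess.run(
            ["gpg", "--batch", "--yes", "--recipient", GPG_KEY_ID,
             "--output", encrypted, "--encrypt", path],
            check=True,
        )
        return encrypted

    if __name__ == "__main__":
        encrypted = encrypt(DUMP_PATH)
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
        key = f"backups/site-db-{stamp}.dump.gpg"
        # Server-side encryption on top of the GPG layer keeps data encrypted at rest.
        boto3.client("s3").upload_file(
            encrypted, BUCKET, key, ExtraArgs={"ServerSideEncryption": "AES256"}
        )
        print(f"uploaded s3://{BUCKET}/{key}")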
Confidentiality
- Our privacy policy is linked from our website footer at https://www.discourse.org/privacy
Compliance
- We maintain SOC 2 Type 2 accreditation. Our current report is available at discourse.org/forms/soc2.
- We maintain ISO 27001:2013 certification. Our current certificate is available at discourse.org/forms/iso27001.
System Development and Maintenance
- All employee GitHub accounts have two-factor authentication enforced
- Our security policy is at https://github.com/discourse/discourse/blob/master/docs/SECURITY.md
- Our code is 100% open source and freely auditable at https://github.com/discourse/discourse
- Developers have a complete local setup for all code so they can work and test using dummy data (no customer data is used locally)
- We have a large set of “unit tests” that validate code changes do not break functionality; they run after every check-in
- We have a “smoke test” that validates our code does not break essential functionality in a headless web browser; it runs after every check-in (a minimal sketch of the idea appears at the end of this section)
- We continuously deploy all check-ins to meta.discourse.org for team and public testing as soon as our automated testing passes
- If code is stable on meta.discourse.org for a period of days, we deploy it to a few low-volume hosted customers; if code is stable on low-volume customers, we deploy it to a few high-volume hosted customers; if code is stable on high-volume customers, we deploy it to all hosted customers
- Our general software methodology is documented on Discourse Meta: “How do we decide what goes into each release of Discourse?”
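Discourse's real smoke test uses its own tooling; the sketch below only illustrates the idea of exercising essential functionality in a headless browser, here using Python with Selenium and headless Chrome. The target URL and the CSS selector are assumptions.

    #!/usr/bin/env python3
    """Minimal sketch of a headless-browser smoke test: load the site and
    confirm the topic list renders. Illustrative only; URL and selector are
    assumptions, and Discourse's actual smoke test uses its own tooling."""
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    SITE = "https://meta.discourse.org"  # placeholder target

    options = Options()
    options.add_argument("--headless=new")  # run Chrome without a display
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(SITE)
        # Essential functionality: the topic list must appear within 30 seconds.
        WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, ".topic-list"))
        )
        print("smoke test passed: topic list rendered")
    finally:
        driver.quit()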
Business Continuity Planning
- All our code is hosted on GitHub; most of it is public and therefore mirrored in multiple places and recoverable in the event of disaster
- Internal operational data is also backed up to AWS S3 and encrypted at rest
- We have a built-in disaster recovery test that pulls down a random customer backup from Amazon S3 and restores it as a functioning Discourse instance. This test runs daily (a simplified sketch appears at the end of this section)
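A simplified sketch of the daily restore test follows. The bucket, prefix, and restore command are placeholders standing in for the real implementation; the point is to pick a backup at random and prove it restores to a working instance.

    #!/usr/bin/env python3
    """Simplified sketch of the daily disaster recovery test: pick a random
    customer backup in S3, download it, and restore it into a scratch
    instance. Bucket, prefix, and restore command are placeholders."""
    import random
    import subprocess

    import boto3  # assumes the boto3 package is installed

    BUCKET = "example-discourse-backups"  # placeholder bucket
    PREFIX = "customers/"                 # placeholder prefix for per-customer backups

    def random_backup_key(s3) -> str:
        """Choose one backup object at random from the bucket."""
        keys = []
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
        return random.choice(keys)

    if __name__ == "__main__":
        s3 = boto3.client("s3")
        key = random_backup_key(s3)
        local = "/tmp/dr-test-backup.tar.gz"
        s3.download_file(BUCKET, key, local)
        # The command below is a stand-in for the real restore tooling; the test
        # then asserts the scratch site comes up and serves pages.
        subprocess.run(["/usr/local/bin/restore-into-scratch-instance", local], check=True)
        print(f"restore test completed for {key}")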
This version of CDCK’s security brief took effect July 3, 2024.