Dear Flarumites,
It is with a sad and heavy heart that we announce today that Flarum.org was the subject of a cyber attack which resulted in the breach of one of our servers. In line with our ethos of total transparency, we want to give you all the full details of what we know and what actions have been taken.
I’m concerned! What do I need to do?!
We understand that security breaches are a concerning thing. We absolutely appreciate that our users expect, and deserve, better. We promise that we will endeavour to secure our systems, which is why we are bringing together our own internal resources with continuous security reviews from external parties as part of our vision for Flarum 2.0 and beyond.
If you are concerned about the impact of this breach and what it means for you, then we recommend:
- Reset your password on any other services where you may have re-used it. (We also strongly advise against re-using passwords.)
- Where possible, enable multi-factor authentication (MFA).
And for those who want our detailed analysis, please see below:
What actually happened?
On the 3rd of October 2023, @luceos noted a number of files on our server which were not part of our deployment for Flarum, and immediately notified the ops team (namely, myself and @katos) through our relevant Discord channel to launch an investigation.
Over the next few days we took action to limit the impact of any breach and to pull files for auditing and forensic analysis. It was identified that a total of 12 files had been uploaded into the /public/ directory of flarum.org.
At the same time, we also identified that an old backup of Discuss' database had not been cleared down from the server; this backup dated back to December 2021.
The decision was taken to copy files off the server for analysis, and to run a git clean and redeploy in order to obliterate any malicious files from our servers (as we use a CI/CD pipeline, this action was quick and easy).
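For those curious what that kind of audit looks like in practice, here is a minimal sketch of how files that are not part of a git-tracked deployment can be listed. The paths are hypothetical and this is not the exact tooling we used; `git clean` performs a comparable comparison and removes whatever git does not track.

```python
# Hypothetical sketch: report files under a deployed web root that git does not track.
# The WEB_ROOT path is illustrative only; this is not the exact audit tooling we used.
import os
import subprocess

WEB_ROOT = "/var/www/flarum"  # hypothetical deployment path (a git checkout)

# Files the repository knows about, relative to the repository root.
tracked = set(
    subprocess.run(
        ["git", "-C", WEB_ROOT, "ls-files"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
)

# Walk the deployed tree and flag anything that is not part of the deployment.
for dirpath, _dirnames, filenames in os.walk(WEB_ROOT):
    if ".git" in dirpath.split(os.sep):
        continue  # skip git's own metadata
    for name in filenames:
        relative = os.path.relpath(os.path.join(dirpath, name), WEB_ROOT)
        if relative not in tracked:
            print("not part of the deployment:", relative)
```

Running `git clean -nd` from the repository root gives a similar dry-run listing, and `git clean -fd` removes untracked files outright (ignored files additionally require `-x`).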
Out of an abundance of caution, we also took the decision to fully remove ALL SSH keys from the server and to re-generate a key for @luceos, who would (for now) be the sole manager of the server whilst investigations were ongoing.
On Friday 06/10/2023 (three days later), we were able to confirm that:
- Among the uploaded files was, unfortunately, a webshell, which allowed the attackers to upload further scripts and expand their arsenal against the server.
- We identified an email spam script (leveraging PHP’s “LeafMailer” library) which was used to send spam mail from our server.
- We identified a PHP shell script allowing file management on the server. We have NO REASON to believe that files served from the server were edited, as all files retained their original content; however, we are unable to verify whether data was exfiltrated from the server.
- We also identified a number of HTML files which contained malicious URL redirects designed to steal credentials. We cannot see evidence of these being used, and suspect that they may have been part of a wider phishing / credential-hijacking email campaign.
- Whilst we do not wish to spread malicious code, we can also confirm that we identified three sources of malicious code that were used in this attack. All three instances came from old shell scripts found online in GitHub gists.
- Of the malicious files, all except two triggered detections through VirusTotal or YARA (a sketch of this kind of scan follows this list).
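For readers unfamiliar with YARA, the sketch below shows what scanning a directory of suspect files against a rule can look like, using the yara-python bindings. The rule and the scan directory are purely illustrative assumptions; they are not the rules or paths used in this investigation.

```python
# Hypothetical sketch: scan copied-off suspect files against a simple YARA rule.
# The rule and SCAN_DIR below are illustrative only, not those used in this investigation.
import os
import yara  # provided by the yara-python package

RULE_SOURCE = r"""
rule illustrative_php_webshell_markers
{
    strings:
        $eval_post  = "eval($_POST" ascii
        $b64_decode = "base64_decode(" ascii
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)
SCAN_DIR = "/tmp/suspect-files"  # hypothetical directory holding the copied files

for dirpath, _dirnames, filenames in os.walk(SCAN_DIR):
    for name in filenames:
        path = os.path.join(dirpath, name)
        matches = rules.match(path)  # returns the list of matching rules, if any
        if matches:
            print(path, "matched:", [match.rule for match in matches])
```

Uploading samples to VirusTotal complements this kind of local scan, since it runs many engines and community rules against the same file.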
Based on the evidence in our logs and on the files we identified (and their subsequent analysis), and whilst we cannot confirm this, we are confident that, given the PHP mail script and the HTML phishing pages, the intention of the compromise was to use our systems as a staging ground for spamming phishing links to other users. We do not believe that Flarum or its users were the endgame of the attack.
This leads us nicely to what we did….
What we did to rectify the issue
As already detailed above, we immediately set up a task force of individuals at Flarum responsible for owning the challenge of identifying the breach and seeing through the remediation and recovery.
Below is a breakdown of what we did:
- Took a copy of the suspect files that we identified on the server.
- Immediately reviewed (and validated) access logs for SSH.
- Ran a git clean to restore the site to its as-deployed-from-CI/CD state.
- Investigated and analysed the file contents to identify the scope of the breach.
- As we learned that the shell ran as our web user, and had potential access to the SQL backup file, we took the decision to rebuild our server from scratch. This went as follows:
- An ENTIRELY NEW server was created (copying NO files from the previous one).
- The server was configured with:
  - Ubuntu's latest LTS version.
  - No root access allowed.
  - Cloudflare Zero-Trust SSH tunnel access only.
  - Laravel Forge login.
  - A dedicated AV scanner in place.
  - CrowdSec installed to monitor activity on the box.
- A review was taken and we decided that:
  - Access to the server should be restricted to DevOps/DevSec only.
  - All access will have to pass through Cloudflare ZTNA.
  - A review of the code would be made to ensure that we are running up-to-date libraries and functions where possible.
  - A review of our PHP functions would be taken to ensure that any redundant, obsolete or excessive functions would be removed.
  - We would more regularly re-deploy our server through our CI/CD stack to ensure that only those files which we allow through our repository exist on the server.
We also took the decision to step up our security efforts by creating a dedicated security team, both for our code (already in place) and for our infrastructure, where this was not previously clearly defined.
We then, naturally, informed you guys. As part of this:
- We are advising that ALL USERS reset their password (we're forcing it). This is due to the database backup having been available on the server: whilst we hash ALL of our passwords with a strong algorithm, we did not wish to risk any potential further impact to our users. (A brief illustration of why hashing matters follows this list.)
- We developed a framework for transparent communication of any issues going forwards.
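To illustrate why a strongly hashed password database is far less useful to an attacker than plaintext credentials, here is a minimal sketch of bcrypt-style hashing and verification. It is written in Python purely for illustration and is not our production code; the `bcrypt` package and the example passwords are assumptions for the demo.

```python
# Hypothetical sketch of bcrypt-style password hashing and verification.
# Illustrative only; this is not Flarum's production code.
import bcrypt  # provided by the "bcrypt" package

def hash_password(plaintext: str) -> bytes:
    # A fresh random salt is generated per password and embedded in the hash,
    # so identical passwords still produce different stored values.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    # Verification re-hashes the attempt using the salt embedded in the stored hash.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
print(verify_password("a guessed password", stored))            # False
```

Because this kind of hashing is deliberately slow and salted, anyone holding a copy of the database cannot simply look the hashes up; each password would have to be brute-forced individually. Even so, we still ask everyone to reset their password and to avoid re-using it elsewhere.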
No compromise of a system is ever easy, and we very much know that we have learned lessons the hard way.
What we learnt
We unfortunately identified a number of shortfalls in our processes, and we know that you absolutely deserve better. That’s why we’re announcing our difficult lessons learned.
- We will be conducting regular security reviews of our full stack, from code to infrastructure, to ensure that we maintain secure access at all times.
- We will be regularly reviewing all configurations and ensuring that we keep up to date with the latest best practices for security and development.
- Starting today, we have forced a reset of all user passwords to ensure that we do not risk user compromise.
What we are doing to fix this going forwards
As outlined, we will be moving our stack to a new server which has security at its forefront. All access will now be governed through a ZTNA pipeline provided by Cloudflare, and access will be restricted to these tunnels. This significantly reduces the attack surface and ensures that all access is explicitly defined.
We will also be more granularly segregating our web stack to reduce the attack surface between our offerings: flarum.org, discuss.flarum.org and next.flarum.org.
Alongside this, we are also working to introduce a new internal Cyber Security Team, which will be responsible for the maintenance and security of our systems and will handle any updates, deployments and changes to our infrastructure from here on. This will ensure that our systems are vetted prior to any changes and that our stack is regularly assessed for threats.
We will also be deploying next-generation EDR to not only detect threats, but also automatically alert on and remediate them as they are discovered on our systems.
We will also be drafting, and distributing, dedicated cyber playbooks in case an incident such as this should happen again. The unfortunate incident we have dealt with here highlighted the need for a process to follow; without one, we lost valuable time and information which may have assisted with remediation and recovery, as well as with transparent reporting.
If you have any further questions or concerns, we meant what we said about transparency. Please do reach out to us below or by DM and we will be happy to answer what we can.
We thank you for your continued support and patience, and we look forward to continuing to serve the greatest forum content.
- The Flarum Staff Team.