Security & Infrastructure

Here’s how we keep your data secure and available

The Forms On Fire platform provides robust and secure functionality for the rapid creation and deployment of connected, data-driven business applications, with the primary use case of replacing paper forms with mobile applications. The application architecture and failover designs leverage world-class technology to deliver a massively scalable, highly available and cost-effective software-as-a-service offering.

Built on Microsoft Azure

Forms On Fire is hosted on Microsoft’s Azure public cloud infrastructure, which enables the delivery of highly scalable, available and fault-tolerant services. The application architecture is designed to leverage Azure’s strong geo-redundancy, replication and recovery options, and follows Microsoft recommended best practices and processes.

Azure meets a broad set of international and industry-specific security, privacy and compliance standards including ISO 27001, HIPAA, FedRAMP, SOC 1 and SOC 2, as well as country-specific standards like Australia IRAP, UK G-Cloud, and Singapore MTCS.
More information, including white papers and other resources, can be found at:
https://azure.microsoft.com/en-us/support/trust-center

Operational Practices

Industry-standard tools and practices for software development, quality assurance, deployment and configuration are part of the daily operation of our SaaS platform. Software and environment changes are versioned and committed to source control systems, with continuous integration tools providing automated testing and build procedures.

Application updates are deployed to a staging environment and then promoted to production using Azure’s Virtual IP address mechanism to avoid downtime. In the event of issues with the new production deployment, the environment is immediately rolled back to the prior stable version. All environmental aspects are defined via controlled configuration files, ensuring that application deployments execute on a consistent infrastructure and operating system environment.
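For illustration, the promote-and-roll-back flow described above can be sketched as follows. This is a minimal sketch only; the helper functions are hypothetical stubs standing in for the Azure deployment tooling, not the platform's actual release scripts.

    # Minimal sketch of a staged deployment with automatic rollback.
    # The helpers below are hypothetical stubs, not Azure or Forms On Fire APIs.

    def deploy_to_staging(artifact: str) -> None:
        print(f"deploying {artifact} to the staging environment")

    def smoke_test(environment: str) -> bool:
        print(f"running automated checks against {environment}")
        return True  # stub: assume the checks pass

    def swap_virtual_ip() -> None:
        print("swapping staging and production virtual IP addresses")

    def release(artifact: str) -> None:
        deploy_to_staging(artifact)
        if not smoke_test("staging"):
            raise RuntimeError("staging checks failed; production is untouched")
        swap_virtual_ip()              # zero-downtime promotion to production
        if not smoke_test("production"):
            swap_virtual_ip()          # swapping back restores the prior stable version

    if __name__ == "__main__":
        release("build-1234")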

Robust monitoring tools are employed to log, analyze and constantly measure platform performance, availability and responsiveness. Automated alerts and notifications are raised when key measures approach acceptability limits, allowing the team to respond to issues promptly and proactively.
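As a simple illustration of this approach, the sketch below checks sampled measures against warning thresholds; the metric names and limits are hypothetical placeholders, not our actual monitoring configuration.

    # Minimal sketch of threshold-based alerting; metric names and limits
    # are hypothetical placeholders, not the platform's real configuration.
    WARN_THRESHOLDS = {
        "p95_response_ms": 800,     # warn as responsiveness approaches its limit
        "error_rate_pct": 1.0,
        "availability_pct": 99.9,   # warn as availability drops toward its floor
    }

    def check_metrics(samples: dict) -> list:
        alerts = []
        if samples["p95_response_ms"] > WARN_THRESHOLDS["p95_response_ms"]:
            alerts.append("response time approaching acceptable limit")
        if samples["error_rate_pct"] > WARN_THRESHOLDS["error_rate_pct"]:
            alerts.append("error rate approaching acceptable limit")
        if samples["availability_pct"] < WARN_THRESHOLDS["availability_pct"]:
            alerts.append("availability below target")
        return alerts

    print(check_metrics({"p95_response_ms": 950,
                         "error_rate_pct": 0.2,
                         "availability_pct": 99.95}))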

Data Replication and Backup

Data generated and stored on the platform is replicated between two physical data centers via Azure’s paired region approach. Azure geo-replication and geo-redundancy features are utilized for storage and database operations, guided by Microsoft recommended practices. Point-in-time backups are also executed automatically, hourly for the database and daily for general file storage.

System Failover and Disaster Recovery

The application architecture follows best practices to ensure failover and recovery can occur across multiple levels and scenarios. At a hosting level, the platform is deployed across a primary and secondary data center pair. These data centers are sufficiently physically distant from each other to reduce the likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at once. In the event of tier failure or outright disaster, failover procedures will transition services from the primary to the secondary center.

Network and Platform Security

Server instances run behind Azure’s comprehensive firewall and load balancing solution. Inbound connections from the Internet and to remote management ports are blocked by default, with access tightly restricted to legitimate protocols and traffic only. All firewall configurations are version controlled and peer reviewed as part of the standard change management processes. For more information on Azure-specific security, refer to Microsoft’s self-assessment paper here:
https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Backend access to platform databases, storage accounts and server instances is restricted to qualified team members only, with all actions performed using Microsoft-provided management tools over SSL-secured connections.

All app, web browser and REST API interactions with the platform occur using 256-bit SSL/TLS encryption (HTTPS protocol). Users are required to log in with an email and password, and their login and access activity is recorded. API access is authenticated against a platform-generated 32-character secret key token.
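For illustration, an authenticated HTTPS call might look like the sketch below; the URL and header name are hypothetical placeholders rather than the documented REST API, so consult the API documentation for the real endpoint and authentication scheme.

    # Sketch of an HTTPS REST call authenticated with a secret key token.
    # The URL and header name below are hypothetical placeholders.
    import requests

    API_KEY = "0123456789abcdef0123456789abcdef"   # 32-character secret key token

    response = requests.get(
        "https://example.invalid/api/v1/forms",    # placeholder endpoint
        headers={"X-Api-Key": API_KEY},            # placeholder header name
        timeout=30,
    )
    response.raise_for_status()   # requests verifies the TLS certificate by default
    print(response.json())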
Passwords stored on mobile devices and platform servers are always encrypted using the AES 256-bit encryption algorithm according to industry standard practices.
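The sketch below shows AES-256 encryption of a stored secret using the third-party Python cryptography package; the AES-GCM mode, key handling and names are illustrative choices for the example, not a description of the platform's internal implementation.

    # Illustrative AES-256 encryption and decryption of a stored secret.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit key, kept in secure storage
    nonce = os.urandom(12)                      # unique nonce for each encryption
    ciphertext = AESGCM(key).encrypt(nonce, b"stored-password", None)

    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"stored-password"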
When a user account is terminated or deactivated, an automatic wipe of local app data is executed the next time the user attempts to access the app.

Frequently Asked Questions

Below is a set of system and security questions commonly asked of Forms On Fire. Please note that the infrastructure and system design are subject to change, so the answers below may be revised from time to time. All answers apply to our cloud services unless otherwise indicated.

As of May 25, 2018, all data is encrypted at rest. When data is transported between servers and devices, it is encrypted over HTTPS using 256-bit SSL/TLS.

Only employees and contractors have access to network and infrastructure services, with the access level based on their role. Clients have no network or infrastructure services access.

Yes. Password management software regularly (at least annually) rotates and renews passwords.

Some shared accounts are employed based on access role; otherwise, employees have their own dedicated accounts. Clients have no access or accounts, as mentioned above.

We require a minimum of 6 characters in passwords at our basic password management level. OWASP and NIST SP 800-63-3 password policy options will be available from May 2018 for all client accounts. Clients can also implement their own choice of strength requirements by creating users and passwords through our APIs and turning off user password change functionality in the app.
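As a simple illustration, a configurable length-plus-denylist check in the spirit of NIST SP 800-63 might look like the sketch below; the limits and the word list are examples only, not the platform's actual policy options.

    # Minimal sketch of a configurable password strength check; the minimum
    # length and denylist are illustrative, not the platform's real policy.
    COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

    def password_ok(password: str, min_length: int = 6) -> bool:
        if len(password) < min_length:
            return False
        if password.lower() in COMMON_PASSWORDS:
            return False                # reject well-known common passwords
        return True

    print(password_ok("p@ss"))                      # False: too short
    print(password_ok("password", min_length=8))    # False: common password
    print(password_ok("correct horse battery"))     # True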

Yes. All firewalls and load balancing facilities are provided by Microsoft’s Azure platform. Refer to Microsoft’s STAR self-assessment details found here:
https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Yes, this is inherited from Azure’s default infrastructure zoning. Refer to Microsoft’s STAR self-assessment details found here:
https://cloudsecurityalliance.org/star-registrant/microsoft-azure

We do not store PCI data, but network segmentation is employed based on Azure’s default configurations in this respect. Refer to Microsoft’s STAR self-assessment details found here:
https://cloudsecurityalliance.org/star-registrant/microsoft-azure

A broad spectrum of monitoring tools is run, supplemented by notifications and alerts provided by Azure. This includes intrusion detection and email confirmations of network access.

Yes, Windows and Mac computers are utilized with auto-updating of operating systems and antivirus enabled.

Various security services are utilized to provide regular system security audits.  Clients can also contact us to conduct penetration testing as desired to meet their requirements.

We subscribe to various services that conduct automated penetration tests monthly using industry-standard security tools.

Yes. Servers are constantly updated and patched by Microsoft automatically via their Azure service.

Logs are preserved for a minimum of 1 month, with some remaining for up to 6 months, depending on severity and action required.

Yes. Logs are reviewed daily, weekly and monthly, depending on the nature of the log events.

Yes. Refer to our Privacy Policy and GDPR information page.

Yes. A standard, documented process for responding to security breaches is followed. This includes notifying impacted clients within 72 hours of a confirmed breach.

Yes. Refer to our Privacy Policy for more information on this.

Unless you are on a private server, you are hosted in the USA. Refer to our Privacy Policy for more details.

This is provided by Azure internally. Refer to Microsoft’s STAR self-assessment details found here:
https://cloudsecurityalliance.org/star-registrant/microsoft-azure

This is provided by Azure internally.  Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Monitoring tools provide access to the logging events needed when correlating activity with attacks.

Microsoft Azure is audited annually for ISO 27001 compliance. Our development team follows industry best practices for data and system security, including ISO 27001 recommendations. We are not currently audited or otherwise certified under such frameworks ourselves, though we may formally gain a relevant certification in the future.

Every client account is logically separated from other clients, through the use of a required, persistent tenant identifier on all database records. All application code requires this tenant identifier for all operations – both read and write. An automated testing regime is also in place to protect code changes from regressions and possible cross-tenant data contamination.

The tenant identifier is “hard linked” to every user account and logically enforced through fixed “WHERE” clauses on database queries, with equivalent measures for file access. A platform user is not able to change or otherwise unlink their session or account from this tenant identifier, so there is no logical possibility of a user having login authorization under a different tenant identifier. Even if a user tried to access pages using a different tenant’s ID, the system would reject the request because the user account is not registered to the requested tenant ID.
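The enforcement described above can be illustrated with the sketch below, which binds every query to the tenant identifier of the authenticated session; the table and column names are illustrative, not the platform's actual schema.

    # Sketch of tenant-scoped data access: the WHERE clause on tenant_id is
    # fixed and always applied, so other tenants' records can never be read.
    # Table and column names are illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE form_entries (id INTEGER, tenant_id TEXT, data TEXT)")
    conn.executemany("INSERT INTO form_entries VALUES (?, ?, ?)",
                     [(1, "tenant-a", "inspection 1"),
                      (2, "tenant-b", "inspection 2")])

    def entries_for_session(session_tenant_id: str):
        # The caller cannot supply or override the tenant identifier; it is
        # taken from the authenticated session only.
        return conn.execute(
            "SELECT id, data FROM form_entries WHERE tenant_id = ?",
            (session_tenant_id,),
        ).fetchall()

    print(entries_for_session("tenant-a"))   # only tenant-a's records are returned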

A “living document” is maintained by our developers which outlines disaster and incident response checklists, contact details and key system facilities for understanding and responding to incidents.

The option to install antimalware is available if needed; however, the default configuration is the same as Microsoft’s, which is that antimalware is not installed.

Developers do not remotely log in to or otherwise install software on our Cloud Services instances, aside from the standard closed-loop deployments through standard Azure management tools. Thus the risk of malware installation is minimal, due to the lack of any direct login access to the instances.

The servers are re-created using new, default Cloud Service instances every time a platform upgrade is deployed, which happens on average every 2 days or less.

This highly frequent re-creation of fresh instances also reduces any possible exposure time to malware in the highly unlikely event that any were deployed to the servers.

Performance & Disaster Recovery

We don’t provide such metrics to clients, aside from the availability and response timings published on our status page.

Yes, the operations group performs recovery checks and tests annually.

The recovery time objective (RTO) is 4 hours, and the recovery point objective (RPO) is 1 hour.

All aspects are multi-tenanted, so backups are taken across the entire client base. Complete file backups are performed every 24 hours, and Azure database point-in-time backups are taken every 5 minutes.

Database point-in-time backups are retained for 30 days and general file backups for a similar period.

Any client restore in any non-disaster scenario must be requested and scheduled with the support team. Turnaround is between 1 and 2 business days.

If restoration of a specific record or artefact is required by a client, this can be performed online on a per-request basis and is chargeable work. There is no impact on the platform or client account.

Multiple server instances are run at all system tiers, including the database (which is replicated). Failure of a server instance within the data center is handled by Azure’s load balancers, with the problem instance recycled and/or removed and replaced with a new instance.

Yes. All data is replicated to a second data center in a different geographic region.

Failure of a server instance within the primary data center is handled by Azure’s load balancers, with the problem instance recycled and/or removed and replaced with a new instance.

In the event that the entire data center were to have a critical failure, switchover to the secondary center is a manual process, as a full assessment of the issue needs to be performed first to ensure there are no simple workarounds to keep the existing primary center available. If it is determined that a move to the secondary center is required, switchover is initiated manually to meet the target recovery objectives.