Security & Infrastructure
Here’s how we keep your data secure and available
The Forms On Fire platform provides robust and secure functionality for the rapid creation and deployment of connected, data-driven business applications, with the primary use case of replacing paper forms with mobile applications. The application architecture and failover designs leverage world-class technology to deliver a massively scalable, highly available and cost-effective software as a service offering.
Built on Microsoft Azure
Forms On Fire is hosted on Microsoft’s Azure public cloud infrastructure, which enables the delivery of highly scalable, available and fault-tolerant services. The application architecture is designed to leverage Azure’s strong geo-redundancy, replication and recovery options, and follows Microsoft recommended best practices and processes.
Azure meets a broad set of international and industry-specific security, privacy and compliance standards including ISO 27001, HIPAA, FedRAMP, SOC 1 and SOC 2, as well as country-specific standards like Australia IRAP, UK G-Cloud, and Singapore MTCS.
More information, including white papers and other resources, can be found at:
Using industry-standard tools and practices for software development, quality assurance, deployment and configuration is part of the daily operation of our SaaS platform. Software and environment changes are versioned and committed to source control systems, with continuous integration tools providing automated testing and build procedures.
Application updates are deployed to a staging environment and then promoted to production using Azure’s Virtual IP address mechanism to avoid downtime. In the event of issues with the new production deployment, the environment is immediately rolled back to the prior stable version. All environmental aspects are defined via controlled configuration files, ensuring that application deployments execute on a consistent infrastructure and operating system environment.
Robust monitoring tools are employed to log, analyze and constantly measure platform performance, availability and responsiveness. Automated alerts and notifications are raised when key measures approach acceptable limits, allowing the team to respond promptly and proactively to issues.
Data Replication and Backup
Data generated and stored on the platform is replicated between two physical data centers via Azure’s paired region approach. Azure geo-replication and geo-redundancy features are utilized for storage and database operations, guided by Microsoft recommended practices. Point-in-time backups are also executed automatically: hourly for databases and daily for general file storage.
System Failover and Disaster Recovery
The application architecture follows best practices to ensure failover and recovery can occur across multiple levels and scenarios. At a hosting level, the platform is deployed across a primary and secondary data center pair. These data centers are sufficiently physically distant from each other to reduce the likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at once. In the event of tier failure or outright disaster, failover procedures will transition services from the primary to the secondary center.
Network and Platform Security
Server instances run behind Azure’s comprehensive firewall and load balancing solution. Inbound connections from both the Internet and remote management ports are blocked by default, with access tightly restricted to legitimate protocols and traffic only. All firewall configurations are version controlled and peer reviewed as part of the standard change management processes. For more information on Azure-specific security, refer to Microsoft’s self-assessment paper here:
Backend access to platform databases, storage accounts and server instances is restricted to qualified team members only, with all actions performed using Microsoft provided management tools across SSL-secured connections.
All app, web browser and REST API interactions with the platform occur using 256-bit SSL/TLS encryption (HTTPS). Users are required to log in with an email and password, and their login and access activity is recorded. API access is authenticated against a platform-generated 32-character secret key token.
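The shape of an authenticated API call can be sketched as follows. The base URL and header name are illustrative assumptions, not the documented Forms On Fire API; only the pattern matters: a 32-character secret key token sent over an HTTPS connection.

```python
import urllib.request

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint, for illustration only
SECRET_KEY = "0123456789abcdef0123456789abcdef"  # placeholder 32-character token

# Build an HTTPS request that presents the secret key token in a header.
# The header name below is an assumption; consult the platform's API
# documentation for the actual authentication scheme.
request = urllib.request.Request(
    f"{API_BASE}/forms",
    headers={"X-Api-Key": SECRET_KEY},
)

# The https:// scheme means the token only travels over a TLS-encrypted channel.
assert request.full_url.startswith("https://")
print(request.get_header("X-api-key"))  # urllib normalises header capitalisation
```

Because the token is carried inside the TLS session, it is never visible to intermediaries on the network path.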
Passwords stored on mobile devices and platform servers are always encrypted using AES 256 bit encryption algorithms according to industry standard practices.
When a user account is terminated or deactivated, an automatic wipe of local app data is executed the next time the user attempts to access the app.
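That launch-time check can be sketched as below, assuming a hypothetical local data directory and a boolean standing in for the real account-status call to the platform.

```python
import shutil
from pathlib import Path

def check_account_and_wipe(local_data_dir: Path, account_is_active: bool) -> bool:
    """Launch-time check: wipe local app data if the account was deactivated.

    `account_is_active` stands in for a real status request to the platform.
    Returns True when a wipe was performed.
    """
    if account_is_active:
        return False  # account still valid, keep local data
    if local_data_dir.exists():
        shutil.rmtree(local_data_dir)  # remove cached forms, drafts and credentials
    return True
```

Running the check on every launch ensures that a deactivated user cannot keep reading previously synced data from the device.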
Frequently Asked Questions
Below is a set of system and security questions commonly asked of Forms On Fire. Please note that the infrastructure and system design are subject to change, so the answers below may be revised from time to time. All answers apply to our cloud services unless otherwise indicated.
Is data “encrypted at rest” (e.g. in static backups, databases, file storage) and in transit?
Yes. As of May 25, 2018, all data is encrypted at rest. Data in transit between servers and devices is encrypted over HTTPS using 256-bit SSL/TLS.
Are employees only provided with access to the network and network services that they have been specifically authorized to use based on their role? What about customers?
Only employees and contractors have network and infrastructure services access, with the access level based on their role. Clients have no network or infrastructure services access.
Are privileged and generic account access tightly controlled and reviewed on a periodic basis, at least annually?
Yes. Password management software regularly (at least annually) rotates and renews passwords.
Are shared user accounts prohibited for employees? What about customers?
Some shared accounts are employed based on access role, otherwise employees have their own dedicated accounts. Clients have no access/accounts as mentioned above.
Does your password construction require multiple strength requirements?
We require a minimum of 6 characters in passwords at our basic password management level. OWASP and NIST SP 800-63-3 password policy options will be available from May 2018 for all client accounts. Clients can also implement their own choice of strength requirements by creating users and passwords through our APIs and turning off user password change functionality in the app.
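A policy in the spirit of NIST SP 800-63B favors length bounds and a blocklist of known-compromised passwords over forced character-class composition. The sketch below illustrates that approach; the thresholds and the tiny blocklist are illustrative assumptions, not the platform's actual rules.

```python
# Tiny stand-in for a real breached-password list (e.g. a Have I Been Pwned feed).
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(password: str, min_length: int = 8, max_length: int = 64) -> bool:
    """Length-based policy in the spirit of NIST SP 800-63B; limits are assumptions."""
    if not (min_length <= len(password) <= max_length):
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False  # reject well-known compromised passwords
    return True

print(password_acceptable("correct horse battery staple"))  # True: long passphrases pass
print(password_acceptable("qwerty"))                        # False: too short and blocklisted
```

Length plus a breach blocklist tends to reject weak passwords without pushing users toward predictable substitution patterns.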
Is the network boundary protected by a firewall with ingress and egress filtering?
Yes. All firewalls and load balancing facilities are provided by Microsoft’s Azure platform. Refer to Microsoft’s STAR self-assessment details found here:
Are public facing servers in a well-defined De-Militarized Zone (DMZ)?
Yes, this is inherited from Azure’s default infrastructure zoning. Refer to Microsoft’s STAR self-assessment details found here:
Is internal network segmentation used to further isolate sensitive production resources such as PCI data?
We do not store PCI data, but network segmentation is employed based on Azure’s default configurations in this respect. Refer to Microsoft’s STAR self-assessment details found here:
Is network intrusion Detection or Prevention implemented and monitored?
A broad spectrum of monitoring tools is run, supplemented by notifications and alerts provided by Azure. This includes intrusion detection and email confirmations of network access.
Are all desktops protected using regularly updated virus, worm, spyware and malicious code software?
Yes, Windows and Mac computers are utilized with auto-updating of operating systems and antivirus enabled.
Are servers protected using industry hardening practices? Are the practices documented?
Various security services are utilized to provide regular system security audits. Clients can also contact us to conduct penetration testing as desired to meet their requirements.
Is there an ongoing program for network and vulnerability scanning, e.g. port scanning?
We subscribe to various services that conduct automated penetration tests monthly using industry-standard security tools.
Is there active vendor patch management for all operating systems, network devices and applications?
Yes. Servers are constantly updated and patched by Microsoft automatically via their Azure service.
Are all production system errors and security events recorded and preserved?
Logs are preserved for a minimum of 1 month, with some remaining for up to 6 months, depending on severity and action required.
Are security events and log data regularly reviewed?
Yes. Logs are reviewed daily, weekly and monthly – depending on the nature of the log events.
Is there a documented privacy program in place with safeguards to ensure the protection of client confidential information?
Is there a process in place to notify clients if any privacy breach occurs?
Yes. A standard, documented process for responding to security breaches is followed. This includes notifying impacted clients within 72 hours of a confirmed breach.
Do you store, process, transmit (i.e. “handle”) Personally Identifiable Information (PII)?
In what country or countries is PII stored?
Are system logs protected from alteration and destruction?
This is provided by Azure internally. Refer to Microsoft’s STAR self-assessment details found here:
Are boundary and VLAN points of entry protected by intrusion protection and detection devices that provide alerts when under attack?
This is provided by Azure internally. Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure
Are logs and events correlated with a tool providing warnings of an attack in progress?
Monitoring tools provide access to the relevant log events for correlating and identifying attacks in progress.
Is system level security based on industry standard frameworks such as ISO-27001, NIST800-53, or an equivalent framework as appropriate?
Microsoft Azure is audited annually for ISO 27001 compliance. Our development team follows industry best practices for data and system security, including ISO 27001 recommendations. We are not currently audited or otherwise certified under such frameworks, though we may pursue a relevant certification in the future.
How is data segregated from other clients within the solution, including networking, front ends, back-end storage and backups?
Every client account is logically separated from other clients, through the use of a required, persistent tenant identifier on all database records. All application code requires this tenant identifier for all operations – both read and write. An automated testing regime is also in place to protect code changes from regressions and possible cross-tenant data contamination.
The tenant identifier is “hard linked” to every user account and logically enforced through fixed “WHERE” clauses on database queries and equivalent measures for file access. A platform user is not able to change or otherwise unlink their session or account from this tenant identifier, so there is no logical possibility of a user having login authorization under a different tenant identifier. Even if a user tried to access pages using a different tenant’s ID, the system would reject the request because the user account is not registered to the requested tenant ID.
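The fixed-WHERE-clause pattern described above can be sketched with an in-memory database and a hypothetical "records" table. Every row carries a tenant identifier, and every read goes through a WHERE clause bound to the caller's tenant.

```python
import sqlite3

# Hypothetical multi-tenant table; names are illustrative, not the platform schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT NOT NULL, payload TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [("tenant-a", "form 1"), ("tenant-a", "form 2"), ("tenant-b", "form 3")],
)

def records_for(tenant_id: str) -> list:
    # tenant_id comes from the authenticated session, never from request
    # parameters, so a caller cannot widen the query to another tenant's rows.
    rows = conn.execute(
        "SELECT payload FROM records WHERE tenant_id = ? ORDER BY rowid",
        (tenant_id,),
    ).fetchall()
    return [payload for (payload,) in rows]

print(records_for("tenant-a"))  # ['form 1', 'form 2']
```

Because the tenant filter is applied inside the data-access layer rather than in page-level code, a forgotten check in one screen cannot leak another tenant's rows.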
Do you have an Incident Response Plan?
A “living document” is maintained by our developers which outlines disaster and incident response checklists, contact details and key system facilities for understanding and responding to incidents.
What level of network protection does the platform implement?
All network level security is managed by Microsoft Azure. See: https://download.microsoft.com/download/C/A/3/CA3FC5C0-ECE0-4F87-BF4B-D74064A00846/AzureNetworkSecurity_v3_Feb2015.pdf
Do you install Microsoft Antimalware for Cloud Services and Virtual Machines or another antivirus solution on VMs, and can VMs be routinely reimaged to clean out intrusions that may have gone undetected?
The option to install Antimalware is available if needed; however, the default configuration matches Microsoft’s default, in which antimalware is not installed.
Developers do not remotely log in or otherwise install software on our Cloud Services instances aside from the standard closed-loop deployments through standard Azure management tools. Thus the risk of malware installation is minimal due to the lack of any direct login access to the instances.
The servers are re-created using new, default Cloud Service instances every time a platform upgrade is deployed, which happens on average every 2 days or less.
This highly frequent re-creation of fresh instances also reduces any possible exposure time to malware in the highly unlikely event such was deployed to the servers.
Does the platform provide reports for Quality of Service (QOS) performance measurements (resource utilisation, throughput, availability etc)?
We don’t provide such metrics to clients, aside from availability and response timings as per our status page here.
Is the disaster recovery program tested at least annually?
Yes, the operations group performs recovery checks and tests annually.
What is the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) of the system?
The RTO is 4 hours, with RPO being 1 hour.
Do you provide backup and restore plans for individual clients?
All aspects are multi-tenanted, so backups are taken across the entire client base. Complete file backups are performed every 24 hours, and Azure database point-in-time backups are taken every 5 minutes.
What is the maximum time that back-ups are retained?
Database point-in-time backups are retained for 30 days and general file backups for a similar period.
What is the expected turnaround time for a data restore?
Any client restore in a non-disaster scenario must be requested and scheduled with the support team. Turnaround is between 1 and 2 business days.
Can a single entity (e.g. a Form) be restored without impacting the entire platform?
If restoration of a specific record or artefact is required by a client, this can be performed online on a per-request basis and is chargeable work. There is no impact on the platform or client account.
Is High Availability provided – i.e. where one server instance becomes unavailable does another become available?
Multiple server instances are run at all system tiers, including the database (which is replicated). Failure of a server instance within the data center is handled by Azure’s load balancers, with the problem instance recycled or removed and replaced with a new instance.
Is data stored and available in another location (data center) to meet disaster recovery requirements?
Yes. All data is replicated to a second regional data center in a different geographic location.
Is the failover process an active/active, automated switchover process?
Failure of a server instance within the primary data center is handled by Azure’s load balancers, with the problem instance recycled or removed and replaced with a new instance.
In the event that the entire data center were to have a critical failure, switchover to the secondary center is a manual process, as a full assessment of the issue needs to be performed first to ensure there are no simple workarounds to keep the existing primary center presence available. If it is determined that a move to the secondary center is required, then switchover will be initiated manually to meet the target recovery objectives.