Philosophy

Mews is a cloud-native, serverless, multi-tenant SaaS platform. This has implications across all aspects of how Mews operates and how it fulfils various requirements and compliance obligations. It’s important to read the other sections of this documentation through the prism of this philosophy, because Mews may differ from more traditional systems, and some requirements or questions simply do not apply to the Mews platform.

Cloud-native

From day one, Mews was built for the cloud. The system architecture was designed for cloud deployment and it utilizes all of the benefits this brings. Mews is not a system that was designed to run on-premises and later ported or adjusted for cloud deployment, which means that some processes or procedures work differently or do not happen at all.

Serverless

There are multiple ways to operate within a cloud environment. On one end of the spectrum, you could use only low-level cloud services like virtual machines and handle everything else on your own. The advantage is that all cloud providers offer such primitive functionality, so it’s rather straightforward to switch between them – although you need the expertise to configure the servers, databases etc. and to maintain them continually on your own.

On the other end of the spectrum, you could go beyond low-level functionality and consume the cloud as managed services, e.g. computing or storage services. That way, the cloud provider takes care of configuration and maintenance – the disadvantage is that you become more locked in to your specific cloud provider.

We are a proud partner of Microsoft and use Microsoft Azure to its fullest potential, which places us on the serverless side of the spectrum. We use services like Azure App Service for web application hosting, and Azure SQL Database and Azure Storage as storage services. As a result, we don’t operate any virtual machines, web servers or database servers ourselves – that is the responsibility of our cloud provider, including the compliance and security of those services.

These services have their own SLAs defined by Azure. We build our solution on top of them, combining their services and SLAs with our own system to guarantee our SLAs. We use the same approach to guarantee our compliance, security, disaster recovery and other aspects covered in more detail in further sections.

Multi-tenant

There is a single production “installation” of the Mews platform that all of our clients use. That means our clients are always running on the latest version of the platform, with the same features and functionality available to everyone else on the platform (depending on subscription level). From a data perspective, data is not segregated; the storage is shared.

From a security perspective, this is actually very similar to a single-tenant system: there, we would have to ensure that users with different privilege levels can access only the data they are granted access to. In a multi-tenant system, the tenant can be understood as another “layer” of privileges, as sketched below. Having a multi-tenant solution also allows us to effectively implement above-enterprise or above-chain scenarios and deliver a great guest experience, especially in the guest portal.
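
To illustrate the idea (Mews’ internal implementation is not public, so the entity and property names below are hypothetical), one common way to enforce this extra “layer” in code is a tenant-scoped data access layer, sketched here with Entity Framework Core’s global query filters:

    using System;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical entity; the real Mews data model is internal.
    public class Reservation
    {
        public Guid Id { get; set; }
        public Guid EnterpriseId { get; set; }   // the tenant key
        public DateTime StartUtc { get; set; }
    }

    public class PlatformDbContext : DbContext
    {
        private readonly Guid _currentEnterpriseId;

        public PlatformDbContext(DbContextOptions<PlatformDbContext> options, Guid currentEnterpriseId)
            : base(options)
        {
            _currentEnterpriseId = currentEnterpriseId;
        }

        public DbSet<Reservation> Reservations => Set<Reservation>();

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Every query over Reservations is automatically scoped to the current
            // tenant, so shared storage never leaks data across tenants.
            modelBuilder.Entity<Reservation>()
                .HasQueryFilter(r => r.EnterpriseId == _currentEnterpriseId);
        }
    }

In a setup like this, the tenant identifier is derived from the authenticated session, so a forgotten filter in an individual query cannot expose another tenant’s data.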

SaaS

The only thing you need to use the Mews platform is an internet connection and a web browser. Everything else, we take care of. We handle all the aspects that are covered in this documentation, and we strive to do them as well as possible while continually improving. It is our responsibility to ensure the system is fast, always available, backed up, secure, compliant with all applicable legislation, always up to date, accessible to everybody all over the world and usable by a wide range of users.

Customers are only responsible for keeping external records if that is required by law. For example, in France, customers have an obligation to keep tax records on a secure external physical medium for the legally required period.

Infrastructure

We use Microsoft Azure as a cloud provider, and we utilize the following services:

  • Azure SQL Database for storage of relational data
  • Azure Storage for storage of binary data and system assets
  • Azure Cosmos DB for storage of non-relational data
  • Azure Cache for Redis (remote dictionary server) as caching storage
  • Azure App Service for application hosting
  • Azure DNS for domain management
  • Azure CDN as a content delivery network for images and other assets
  • Azure Traffic Manager for DNS-based load balancing
  • Azure Application Gateway for routing
  • Azure Automation for process automation
  • Azure Application Insights for telemetry
  • Azure Cognitive Services for AI services

Some of these services are global, with geo-replication and high availability built in by Azure; others are bound to a single region. We have two fully operational regions: the primary region in West Europe (Netherlands) and the secondary region in North Europe (Ireland). Our primary database is a high-availability cluster within the primary data center, with replicas in the secondary region. We have App Services, SQL Databases, Redis caches and Cosmos DB in both data centers; the rest of the services are shared.

Other services

Besides the above, we use other third-party services for various purposes:

  • Rapid7 insightOps for logging
  • Sentry for error reporting
  • NewRelic for performance monitoring
  • PagerDuty for incident management
  • Firebase for push notifications
  • Namecheap for domain registration
  • PCI Proxy as a card tokenization service
  • Twilio as a text message provider
  • Twilio Sendgrid as an emailing service provider
  • Google Analytics for analyses of user behavior
  • Hotjar for analyses of user behavior
  • GitHub as a source control tool
  • Azure DevOps as a continuous integration pipeline
  • Zapier for system integration
  • Browserstack as a testing tool
  • Statuspage for public system status information

Tech stack

When it comes to programming languages, we use C# and .NET on the backend, JavaScript/TypeScript and React on the frontend, and Flutter and Kotlin in our mobile applications. On top of that, we utilize many open-source libraries, frameworks and other programming languages and technologies. The complete list is evolving all the time, and you can see a full, up-to-date tech stack with all our services, infrastructure, tools, languages etc. at our StackShare.

Environments

The Mews platform runs in multiple instances called environments; some of them are public and some of them are private.

Disaster Recovery

As a cloud-native system, our disaster recovery strategy revolves around data backups and the capability to restore them in case of an incident. All other services are “stateless” which means that in case of disaster, we are able to restore them without any loss of information. We heavily rely on features that Microsoft Azure offers in this area, plus we have our own levels of backups built on top of standard Azure features.

Azure SQL Database

We use the premium tier of Azure SQL Database with a replica in the secondary geographical region. This setup already has several backup layers and mechanisms out of the box, described in full detail in the Azure SQL Database documentation. On top of that, we have our own backup processes. All of the options, both built-in and ours, are described below:

  • Within a data center, the database service runs as a high-availability cluster of two identical replicas of the database, with near real-time replication latency (low milliseconds). In case of a disaster affecting the primary replica, the service immediately fails over to the secondary replica. Alternatively, we are able to trigger this failover manually.
  • The database service offers point-in-time restore which enables us to restore a complete database to a particular point in time up to 35 days back.
  • The database cluster in the primary region is geo-replicated to the secondary high-availability cluster in the secondary region. In case of disaster in the primary region, we are able to perform failover to the secondary region and promote the secondary replica to master. We can do that fast and reliably using auto-failover groups.
  • We perform daily snapshots of the primary database using the point-in-time restore capability to another backup server. The backup server holds two fully restored copies, at most 24 and 48 hours old, ready for immediate usage in case of disaster affecting both the primary and secondary database. Alternatively, these snapshots may be used in case of partial data corruption to restore the data immediately.

Azure Storage

As a store for binary data, we use Azure Storage configured to use geo-redundant storage capabilities. The data is automatically replicated three times within the primary region and three times in the secondary region. The storage account also uses the soft-delete feature, which protects against application-specific issues and allows us to recover potentially corrupted data. Similarly to the SQL database, we perform daily incremental backups of all data in the storage into backup storage that is ready for immediate usage in case of a disaster affecting the primary storage.
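
For illustration only: blob soft delete is a storage-account setting that is normally managed as infrastructure configuration, but it can also be enabled through the Azure.Storage.Blobs SDK. The retention period below is an example value, not our production setting:

    using Azure.Storage.Blobs;
    using Azure.Storage.Blobs.Models;

    // Local emulator connection string for the sketch; a real connection string
    // would come from secure configuration.
    var serviceClient = new BlobServiceClient("UseDevelopmentStorage=true");

    BlobServiceProperties properties = await serviceClient.GetPropertiesAsync();

    // Keep deleted blobs recoverable for 14 days (illustrative value only).
    properties.DeleteRetentionPolicy = new BlobRetentionPolicy { Enabled = true, Days = 14 };

    await serviceClient.SetPropertiesAsync(properties);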

Cosmos DB

We have Cosmos DB configured to be replicated into multiple regions. Cosmos DB transparently replicates the data to all regions associated with our account, and supports automatic failover in case of a regional outage. Currently, we store only non-business-critical data in Cosmos DB (e.g. logs), and therefore we don’t have any additional layer of backups built on top of the features offered by the service.

Deployments

Our philosophy when it comes to deployments is to deploy as often as possible: the smaller the deployed change-set, the better. The main reason is that this helps us deliver finished features and fixes to our clients as soon as possible so that they can benefit from them immediately. The secondary reason is to minimize problems during deployments and to simplify investigation and rollback in case of any problems. None of our deployments cause downtime for the end user.

Deployment schedules

Our platform is not a single application that we deploy en bloc, but rather an assembly of systems and applications that work and communicate together and that have their own deployment schedules. There are three main categories of applications, with their respective deployment schedules:

  • Backend platform (server) is deployed at least once every weekday. On top of that, if necessary, ad-hoc deployments can be done for various purposes, e.g. hot fixes. The standard scenario is that all changes (features, bug fixes, improvements) are continuously deployed to the development environment. Once a day, we automatically take a snapshot of the development environment version and deploy it to the demo environment (this is called a feature freeze). The next day, if there were no problems in the demo environment, that version is deployed to the production environment. All deployments happen gradually across all instances and regions. During the process, we monitor the system, and in case of any issues we’re able to roll back the deployment.
  • Web applications (e.g. Commander, Distributor or Navigator) are deployed independently whenever a change is finalized in the application. That means whenever a feature is implemented or a bug is fixed and passes quality assurance, it is immediately deployed to both demo and production. This is true continuous delivery, which means there might be 50 deployments in a day or none, depending on how many changes are finalized that day. Again, we monitor the health of the applications and are able to roll back any deployment if necessary.
  • Mobile applications are deployed irregularly, due to verification processes in application stores. Once in a while, when we determine that the set of finished changes in the development version of the application is reasonably big, or when necessary for other reasons, we release a new version of the application. It's then published to the respective application store, where it goes through the verification process. After some time (hours or days), the new version reaches end-user devices.

We reserve the option of scheduled downtime necessary for system changes, although our goal is to never use this option. So far, we have had to use it only once, in 2013, when we were migrating our cloud provider from AppHarbor to Microsoft Azure. Since then, we have had no scheduled downtime.

Release process

It's important to distinguish between deployment and release. Deployment is the moment when a change reaches production. However, the change does not necessarily need to be available to all clients at that moment. The moment when the change is made available to a client is called a release. Smaller changes, bug fixes, improvements or other non-critical things are released to all our clients as soon as they are deployed. However, for bigger or more critical changes, we stick to the following 4-step release process:

  1. Internal alpha: The change is released only to Mews employees. We use Mews internally as well, so we are the “canaries” who test the change.
  2. Private beta: The change is released to a selected subset of clients who form early-adopter groups. If the change is particularly important for someone who was involved in the product discovery and delivery process, they might be included in the private beta as well. For this step, and also for internal alpha, we use LaunchDarkly to manage the set of impacted clients (see the sketch after this list).
  3. Public beta: The change is released to anybody who opts in to the change. Usually, we introduce an option in settings that allows anybody to opt into the change.
  4. General availability: The change is released to all our clients.
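
As a sketch of how such a staged release can be controlled in code (assuming the LaunchDarkly server-side SDK for .NET; the flag key and context key below are hypothetical), the application evaluates a feature flag for the current tenant or user and LaunchDarkly decides which release stage they fall into:

    using LaunchDarkly.Sdk;
    using LaunchDarkly.Sdk.Server;

    // The SDK key comes from secure configuration; never hard-code real keys.
    using var client = new LdClient("sdk-key-from-secure-configuration");

    // The evaluation context identifies who the flag is evaluated for, which is how
    // a change can be limited to internal alpha or private beta groups.
    var context = Context.New("enterprise-1234");

    bool released = client.BoolVariation("new-reservation-screen", context, false);

    if (released)
    {
        // Serve the new behaviour; otherwise fall back to the generally available one.
    }

With this approach, moving a client between release stages is a flag change rather than a new deployment.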

Incidents

For all types of incidents, including security incidents, bugs, investigations or monitoring alerts, we follow a strict resolution process. The process is documented in our internal knowledge base, and all engineers and 24/7 support agents are familiar with it. Internally, we use YouTrack as the main ticketing tool and single source of truth. Each ticket, regardless of severity, has a team assigned and is immediately communicated to that team via an internal Slack channel.

For critical incidents, there is an additional escalation flow:

  • The incident is communicated in a company-wide incident channel.
  • The incident is escalated directly to the team lead via PagerDuty.
  • If not acknowledged by the team lead within 15 minutes, it is escalated to the division lead.
  • If not acknowledged within 30 minutes, it is escalated to the CTO.
  • An incident coordinator is elected.
  • The coordinator creates a communication channel for that particular incident.
  • The coordinator periodically updates our StatusPage.
  • The coordinator ensures that all relevant people are involved in the resolution of the incident.
  • The coordinator communicates the progress of the incident resolution to all stakeholders.
  • After resolution, a postmortem task is created, the team conducts a root cause analysis, designs solutions and publishes the postmortem on the StatusPage.
  • The solutions stemming from the postmortem are implemented within a period defined by our internal SLO.

Security

The Mews platform works with very sensitive customer data; therefore, security and data privacy are non-negotiable elements of the system. Our general approach in this area is that nothing should rely on people or their knowledge. All our security measures and internal processes are designed that way; for example, while our developers are regularly trained on secure coding best practices, we do not rely solely on this for security. Our processes and frameworks are designed to make it impossible to introduce security bugs, or, where that is not technically feasible, to make it extremely difficult for a developer to do so. This is reflected in our security issue resolution process, which is described later. Our security strategy is governed by two main principles:

  1. Minimizing the attack surface, reducing its scope and complexity.
  2. Continuous penetration testing of the attack surface, with extensive and thorough resolution of any findings.

Besides these proactive measures, we regularly undergo audits, certifications, due diligence processes and penetration tests by third-party companies, either appointed by us (e.g. PCI-DSS, ISO) or by our prospective clients.

Minimizing the attack surface

The best way to avoid security issues is to completely eliminate the possibility of making them in the first place. This aligns with our serverless philosophy: we are not in control of hardware, operating systems, web servers or database servers. We are not able to misconfigure any of these systems, and we are not able to forget to apply security patches etc. – this is the responsibility of Azure, which has large, dedicated security teams. We use a very limited configuration of the Azure services, for which there are options to turn on additional security features. To ensure we don't miss any of these, we use Azure Security Advisor, which notifies us about all such options, for example when Azure introduces new features that could harden the security of our systems. Thanks to all of the above, our attack surface (from the system perspective) is effectively reduced to the application code that we develop. For more information about Azure security capabilities, please refer to Azure's security fundamentals documentation.

Continuous penetration testing

As already demonstrated, our primary focus is on application-level security. In order to ensure that our system is secure, we continuously undergo penetration testing by cobalt.io. At any given point in time, a part of our system or a product is being penetration tested, and we make sure that the whole surface is covered by tests in a continuous fashion.

There are multiple approaches to addressing security vulnerabilities. We take pride in our approach and address every security issue in a post-mortem manner, meaning that we perform a detailed root-cause analysis and then fix not only the individual instance but all similar instances across all of our products. On top of this, we put measures in place that prevent such issues from recurring in the future. As an example, if a problem is found in one of our APIs, we update our API framework so that it eliminates the issue from all of our APIs, or we implement a static code analyzer that automatically checks for the issue in our existing codebase as well as in new code that we produce. So even though a single product is being tested at a time, we apply our findings to all of them. For more information, check the Incidents section.
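
As an illustration of the static-analysis approach (the diagnostic ID, rule and method name below are hypothetical and not Mews' actual analyzers), a Roslyn analyzer can flag a dangerous pattern, such as building raw SQL by string concatenation, across the existing codebase and in every new change:

    using System.Collections.Immutable;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public sealed class RawSqlConcatenationAnalyzer : DiagnosticAnalyzer
    {
        private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
            id: "DEMO001",                               // hypothetical diagnostic ID
            title: "Avoid building SQL by concatenation",
            messageFormat: "Use a parameterized query instead of string concatenation",
            category: "Security",
            defaultSeverity: DiagnosticSeverity.Error,
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
            context.EnableConcurrentExecution();
            // Inspect every method invocation in the compilation.
            context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
        }

        private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
        {
            var invocation = (InvocationExpressionSyntax)context.Node;

            // Flag calls to a (hypothetical) ExecuteRawSql method whose first argument
            // is produced by string concatenation.
            if (invocation.Expression is MemberAccessExpressionSyntax member
                && member.Name.Identifier.Text == "ExecuteRawSql"
                && invocation.ArgumentList.Arguments.Count > 0
                && invocation.ArgumentList.Arguments[0].Expression is BinaryExpressionSyntax argument
                && argument.IsKind(SyntaxKind.AddExpression))
            {
                context.ReportDiagnostic(Diagnostic.Create(Rule, invocation.GetLocation()));
            }
        }
    }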

Technical security measures

From the technical perspective, there are a lot of things that we do to ensure the security not only of the platform itself, but also of our clients. Here are some of them:

  • Data is encrypted in transit; we enforce at least TLS 1.2 and achieve an A+ rating in the Qualys SSL Labs test.
  • Data in all types of storage we use is encrypted at rest.
  • We use multiple levels of internal system logs and provide an audit log to our clients inside the platform.
  • We perform regular ASV scans as well as internal and external vulnerability scans.
  • All our devices are protected by anti-virus/anti-malware software and are centrally managed by MDM solutions.
  • We enforce a strong password policy, use 1Password and enforce MFA in all internal systems that offer this functionality.
  • We use Azure Active Directory and SSO for all internal systems that offer this functionality.
  • We follow the principle of least privilege for all our internal systems.
  • Our platform supports MFA and enforces strong passwords for our users.
  • User passwords are hashed using bcrypt and never stored in plaintext (see the sketch after this list).
  • All sensitive user keys and tokens that have to be used for third-party authentication (like FTP passwords) are encrypted in Azure SQL Database.
  • All sensitive Mews keys and tokens are encrypted in configuration.
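
A minimal sketch of that hashing pattern, using the open-source BCrypt.Net-Next library (the work factor is an illustrative value and Mews' internal implementation is not public):

    // Sketch only; the real password-handling code in Mews is internal.
    public static class PasswordHasher
    {
        // A work factor of 12 is a common default; the actual value is an internal detail.
        private const int WorkFactor = 12;

        public static string Hash(string plaintextPassword) =>
            BCrypt.Net.BCrypt.HashPassword(plaintextPassword, WorkFactor);

        // Re-hashes the supplied password with the salt embedded in the stored hash
        // and compares the results, so the plaintext is never persisted anywhere.
        public static bool Verify(string plaintextPassword, string storedHash) =>
            BCrypt.Net.BCrypt.Verify(plaintextPassword, storedHash);
    }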

People security measures

  • We perform background checks on all candidates for sensitive roles we are about to hire. The extent and thoroughness of these checks depends on the role and seniority of the candidate.
  • Contracts with all our employees contain confidentiality clauses, and employees are obliged to follow internal rules on personal data processing.
  • All employees go through a new-hire orientation, which includes mandatory security and data privacy training.
  • All employees use 1Password and they generate their master password using diceware.
  • All members of the technical department (developers, data analysts, quality assurance, IT) go through mandatory security training at least annually.

Payment card data

A great example of reducing the attack surface is how we handle sensitive payment card data. Mews uses PCI Proxy as a card tokenization provider. Sensitive card data like the card number or CVV never even reaches our infrastructure. As an example, let's consider a simple flow of receiving card details from a third party (e.g. a booking channel) and then charging that card:

  1. When a third party needs to send card details to us, they do not route the request to us directly, as would be usual; they route it to PCI Proxy.
  2. PCI Proxy receives the request, detects the card details, stores them and replaces them with tokens that are no longer sensitive. This part is called tokenization.
  3. PCI Proxy forwards the request, now containing tokens, to us.
  4. We store the tokens in our database.
  5. In order to charge the card, we create a request for the payment gateway. However, instead of directly using the card data (which we don't have), we use the tokens. And instead of sending the request directly to the payment gateway, we route it to PCI Proxy.
  6. PCI Proxy receives the request from us, detects the tokens in it and replaces them with the sensitive card details. This part is called detokenization.
  7. PCI Proxy forwards the request, now containing sensitive data, to the payment gateway.

Many types of attacks are rendered useless because our data storages simply do not contain the sensitive data.
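
A simplified sketch of the outbound (detokenization) leg of this flow is shown below. The endpoint, header and field names are hypothetical placeholders rather than PCI Proxy's actual forwarding API; the point is that the charge request carries only tokens and is addressed to the proxy, never to the payment gateway directly:

    using System;
    using System.Net.Http;
    using System.Net.Http.Json;

    // Hypothetical proxy endpoint and headers; the provider's documentation
    // describes the real forwarding API.
    using var proxy = new HttpClient { BaseAddress = new Uri("https://proxy.example.com/") };
    proxy.DefaultRequestHeaders.Add("X-Api-Key", "api-key-from-secure-configuration");
    proxy.DefaultRequestHeaders.Add("X-Forward-To", "https://gateway.example.com/charges");

    var response = await proxy.PostAsJsonAsync("charge", new
    {
        amount = 12900,                      // amount in minor units
        currency = "EUR",
        cardNumberToken = "tok_card_1234",   // the non-sensitive token we store
        cvvToken = "tok_cvv_5678"            // we never see the real CVV
    });

    response.EnsureSuccessStatusCode();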

Data Privacy

The Mews platform processes personal data and other types of sensitive data. However, we are not a standard processor-only SaaS service, so in order to understand all the data flows, it's important to distinguish the two roles Mews acts in:

  • Data processor: In the relationship between us and our clients (hotels), we are a data processor of all their data that enters the platform, including the personal data of their customers. This is part of the service that we provide to our clients.
  • Data controller: In the relationship between us and our users (individuals), we are a data controller of their personal data - we provide a "travel wallet" service and other applications to our users.

These two roles and operational modes of Mews are strictly distinct and the data never mixes. And since we are a multi-tenant solution, a single physical person can have their personal data stored in N+1 copies in the Mews platform. If the person has interacted with N of our clients (e.g. had reservations in N different hotels), then N customer accounts are stored and Mews is the data processor for them. The "+1" represents another copy of the personal data that is stored if the person signed up to Mews as a user in order to use the "travel wallet" service. For this single copy, Mews is the data controller. We don't have any joint-controllership arrangement.

All kinds of data are stored in our Azure data storages as described in the Infrastructure section of this documentation. We perform no archiving of the data, and backups are held only for a limited period of time.

Data of our clients

The data enters the system either manually, entered by an employee of our client, or is provided by their customers, both directly (by sharing from the travel wallet to the client) and indirectly, e.g. via booking channels and our APIs. The data consists of the personal details of our clients' customers, who are mostly travelers from all over the world.

Our clients are able to record various data points about their customers, such as: first name, last name, second last name, date of birth, nationality, identity documents (passport, ID card, driver's license, visa), addresses, e-mail address, phone number etc. Payment card details are collected as well, however they are not directly accessible to our clients or to us. For more details, please refer to the Security section of this documentation.

Processing

We process personal data in accordance with the data processing addendum that we sign with our clients. We are able to access the data when necessary to provide our service (e.g. investigating bugs or helping our clients in other ways based on their requests). We might use the data for internal statistical and analytical purposes; however, it is always anonymized and we follow our contractual obligations. Please also refer to the Subprocessors section, which lists the third-party companies that act as subprocessors of the client data, or of some subset of it.

Retention

We store personal data for as long as necessary, given the purposes for which it was provided or collected. Since we are the processor when it comes to personal data that our clients (the controllers) collect, we are subject to their instructions on how to handle the data. It is the responsibility of the client to ensure that the retention periods applicable to personal data are legally compliant. To allow our clients to manage retention periods, we give them the options to:

  • Manually clear a whole customer profile and the personal data stored there. This impacts all the data points listed above, and a hard delete of all the data is performed.
  • Set up automatic clearing of customer profiles after a specified period without usage. In this case, the system performs what would otherwise have to be done manually using the first option.
  • For payment data, our clients can set the period, in days, after which a customer's payment card information is automatically cleared. This is an automated process that, when set, clears all card data attached to the guest profile. Clearing means that Mews no longer retains the card token and PCI Proxy no longer has information about that card (a sketch of such a job follows this list).
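
Purely as an illustration of what such an automated clearing job can look like (all type and member names below are hypothetical; the real implementation is internal to Mews):

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical abstractions over our database and the tokenization provider.
    public record StoredCard(Guid Id, string Token, DateTime LastUsedUtc);

    public interface ICardStore
    {
        Task<IReadOnlyList<StoredCard>> GetCardsLastUsedBeforeAsync(DateTime cutoffUtc, CancellationToken ct);
        Task HardDeleteAsync(Guid cardId, CancellationToken ct);
    }

    public interface ITokenizationProvider
    {
        Task DeleteTokenAsync(string token, CancellationToken ct);   // clears the card at the provider
    }

    public class CardRetentionJob
    {
        private readonly ICardStore _cards;
        private readonly ITokenizationProvider _tokenization;

        public CardRetentionJob(ICardStore cards, ITokenizationProvider tokenization)
        {
            _cards = cards;
            _tokenization = tokenization;
        }

        // Runs on a schedule; retentionDays is the per-client setting described above.
        public async Task RunAsync(int retentionDays, CancellationToken ct)
        {
            var cutoff = DateTime.UtcNow.AddDays(-retentionDays);

            foreach (var card in await _cards.GetCardsLastUsedBeforeAsync(cutoff, ct))
            {
                await _tokenization.DeleteTokenAsync(card.Token, ct);   // clear at the tokenization provider
                await _cards.HardDeleteAsync(card.Id, ct);              // hard-delete our token
            }
        }
    }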

When we hard-delete data from one of the Azure data storages we use, Microsoft follows strict standards for overwriting storage resources before their reuse and for the physical destruction of decommissioned hardware.

Data requests

We provide means for our clients to fulfill data requests and data deletion requests coming from their customers. We also provide the user portal with messaging functionality that allows our clients to communicate with their customers easily. In case a request to our DPO (dpo@mews.com) is intended for one of our clients and not for us, we forward it to the proper recipient.

Data of our users

The data enters the system only manually, when the user populates or updates their profile, or during sign-up. In order to provide the best experience when, for example, checking into a new hotel, our users are able to record any data point that any hotel might need for the check-in process. It is up to the users to decide whether they want to share their data with a particular client of ours, and to what extent.

On top of these shareable data points, we record usernames, passwords (hashed), means of 2FA authentication and other details necessary for frictionless usage of our platform.

Retention

Due to the nature of the product, whose main function is to serve as a personal data wallet that can be used at any time in the future to share data with our clients, the data is stored for as long as the user has an account with us.

Data requests

The user portal provides users with all of their personal data that we store, as well as the possibility to delete their profile. Another option is to contact our DPO directly at dpo@mews.com.

Subprocessors

To support the delivery of our services, Mews engages service providers that have access to certain user data. We select these third-party subprocessors and third-party processors very carefully; for third-party subprocessors, we require at least SOC 2, PCI-DSS or an equivalent industry audit/certification.

The following overview provides important information about the identity, location and role of each subprocessor.

Third-party subprocessors

A third-party subprocessor is a service that we, as a data processor, utilize to deliver services to our clients (hotels) who are data controllers. Here is the list of third-party subprocessors:

  • Microsoft Corporation - US - Infrastructure, data storage, other services. Data resides in the European Union.
  • Google Ireland, Ltd. - IE - Push notifications, other services. Data resides in the United States.
  • SFDC Ireland, Ltd. - IE - Support desk, community portal. Data resides in the European Union.
  • Datatrans AG - CH - Credit card tokenization. Data resides in Switzerland.
  • Twilio, Inc. - US - Mailing and text messaging. Data resides in the United States.
  • Learnupon, Ltd. - IE - Learning management system. Data resides in the European Union.
  • Aircall SAS - FR - Cloud-based integrated business phone system. Data resides in the United States.
  • OKTA, Inc. - US - Cloud-based identity and access management software. Data resides in the European Union.
  • Gooddata Ireland Limited - IE - Cloud-based analytics and business intelligence platform. Data resides in the European Union.
  • Cloudflare, Inc. - US - Content delivery network. Data resides all around the world. Traffic will be automatically routed to the nearest data center.

Additional third-party subprocessors

The following list includes the service providers (certified partners) who are engaged by Mews to provide professional consultation and deployment services on behalf of Mews to specific clients (hotels) who purchase these kinds of services from Mews. It is important to note that not all service providers listed herein are utilized by all Mews clients. Here is the list of the additional third-party subprocessors:

  • Unisono Hospitality Management GmbH – Germany - Professional consultation and deployment services.
  • CMC Hospitality Software Ltd – United Kingdom - Professional consultation and deployment services.
  • Actrois DJ Conseils – France - Professional consultation and deployment services.
  • Swiss Urban & Mountain Hospitality AG - Switzerland - Professional consultation and deployment services.

Third-party processors

A third-party processor is a service that we, as a data controller, utilize to serve our internal needs. You can find a list of all third-party processors that we use at our StackShare.

Affiliates

The Mews group consists of the following affiliates:

  • Mews Systems B.V. – NL
  • Mews Systems, s.r.o. – CZ
  • Mews Systems, Ltd. – UK
  • Mews Systems Sarl – FR
  • Mews Systems GmbH – DE
  • Mews Systems Iberica S.L. – ES
  • Mews Systems, S.R.L. – IT
  • Mews Systems, Pty Ltd. – AU
  • Mews Systems, Inc. – US
  • Planet Winner, BVBA – BE
  • PMS Winner, AB – SE
  • Databasics Hospitality System, Ltd. – UK
  • Bizzon Limited – UK
  • Bizzon d.o.o. – HR
  • Cenium Scandinavia AS – NO
  • Cenium North America Inc – US
  • Mingus Software Inc – CA

Certifications

Our approach to certifications is to judge them case by case, in an on-demand manner. We are not proactive in this area: even though some certifications can help improve certain processes and can provide assurance that what you do is considered best practice, we also see that some certifications have a hard time keeping up with new technologies and modern software development practices. Therefore, we only undertake the certifications that make sense to us or that are an absolute necessity. Currently, we have the following certifications: