Start designing for application portability – now


Take a closer look at what app portability means – and why you should consider it now. (Source: iStock, courtesy of DataBank)

In this edition of Voices of the Industry, Rob Lerner, VP of Sales and Cloud Strategy at DataBank, examines the concept of application portability and shows how it can help businesses create a win-win in terms of cost, performance, and geography.

Rob Lerner, VP of Sales and Cloud Strategy, DataBank

As businesses think about designing and hosting applications, workloads, and other assets, many already understand the benefits of options like hybrid hosting models or environments designed to support agile development methods. Yet we now believe the industry should focus on a new paradigm first and foremost: design for portability.

As the name suggests, portable application design gives businesses the ability to move applications and data from one type of compute environment (cloud, hybrid cloud, on-premises, or colocation) to another with minimal disruption. This architecture strategy provides maximum flexibility and best positions businesses to rapidly adapt services as business needs change.

Take the case of a company developing a brand-new application, such as a SaaS solution. At first, it makes sense for this company to host the application in the cloud, especially if it lacks the resources or budget to support on-premises hosting and management. However, as the app becomes more popular, it begins to collect and store far more data, which can lead to surprisingly high egress fees and other transaction charges that often come as a shock. Worse yet, cloud-native products and code are not easy to transfer to another cloud vendor's technology, contributing to vendor lock-in and a loss of technological flexibility.

Tune the dials that matter most

Portable design overcomes these issues and helps companies strike a balance between three critical "dials": cost, performance, and geography. For example, if businesses are looking to cut costs, they can migrate data-intensive workloads to a less expensive vendor or an alternative hosting model. If they want to increase performance, they can move a performance-demanding workload to a more suitable cloud provider in real time. If they want to host applications in a particular region, in order to improve network latency or reach consumers at the edge, they can use cloud partners in the ideal location(s).
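As a toy illustration (not DataBank tooling), the three dials can be modeled as a weighted score over candidate hosting options. All option names, dollar figures, and weights below are hypothetical:

```python
# Toy model of the cost / performance / geography "dials".
# Every option, number, and weight here is a hypothetical illustration.

def score(option, weights):
    """Higher is better: cheap, fast, and close to users."""
    return (weights["cost"] * (1.0 / option["monthly_cost"])
            + weights["performance"] * option["perf_index"]
            + weights["geography"] * option["region_fit"])

options = {
    "hyperscale_cloud": {"monthly_cost": 12000, "perf_index": 0.9, "region_fit": 0.9},
    "private_cloud":    {"monthly_cost": 7000,  "perf_index": 0.8, "region_fit": 0.6},
    "colocation":       {"monthly_cost": 5000,  "perf_index": 0.7, "region_fit": 0.5},
}

# A cost-driven business turns the cost dial up; a latency-sensitive one
# would weight performance or geography instead.
weights = {"cost": 10000, "performance": 1.0, "geography": 1.0}
best = max(options, key=lambda name: score(options[name], weights))
print(best)  # with these weights, the cheapest option wins
```

The point of portability is that when the weights change, say performance suddenly matters more than cost, the workload can actually follow the new winner.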

Best of all, portable app design allows businesses to optimize results across all three dials. It doesn't force IT managers to prioritize one over the others; rather, it provides the full flexibility to strike the right balance for their business on an ongoing basis.

However, it is important that companies start thinking about portable application design as early as possible. If they rely too heavily on native tools from a specific cloud provider, they can inadvertently paint themselves into a corner, because it may simply be too expensive or difficult to redesign for a portable architecture later. (We'll describe this in more detail below.)

Overview of portable applications

The image below illustrates a portable workload infrastructure, which balances the elasticity of autoscaling against the savings of better-utilized resources by distributing workloads across the infrastructure:

[Image: portable workload infrastructure]

Within this architecture, if your business experiences periodic peaks in activity, bursting workloads to the public cloud during those times is the most cost-effective approach. When the workload is relatively consistent, a private cloud or colocation data center will most likely reduce costs over the long run.
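A back-of-the-envelope sketch of that trade-off, using hypothetical hourly rates and instance counts: run the steady baseline on fixed private-cloud capacity and burst only the peak hours to the public cloud, versus running everything on demand in the public cloud.

```python
# Hypothetical rates: private cloud is cheaper per hour for always-on
# capacity; public cloud is pay-as-you-go and absorbs the peaks.
HOURS_PER_MONTH = 730
PRIVATE_RATE = 0.06   # $/instance-hour, reserved baseline (made up)
PUBLIC_RATE = 0.10    # $/instance-hour, on demand (made up)

baseline_instances = 10              # needed around the clock
peak_extra_instances = 30            # needed only during spikes
peak_hours = 0.2 * HOURS_PER_MONTH   # spikes ~20% of the time

# All-public: baseline runs on demand all month, plus the burst hours.
all_public = (baseline_instances * HOURS_PER_MONTH
              + peak_extra_instances * peak_hours) * PUBLIC_RATE

# Hybrid: baseline on private cloud, only the burst hours on public cloud.
hybrid = (baseline_instances * HOURS_PER_MONTH * PRIVATE_RATE
          + peak_extra_instances * peak_hours * PUBLIC_RATE)

print(f"all-public: ${all_public:,.0f}/mo  hybrid: ${hybrid:,.0f}/mo")
```

With these illustrative numbers the hybrid approach costs noticeably less per month; the steadier the baseline, the larger that gap grows.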

Cloud native technologies reduce portability

To demonstrate the importance of portability, consider the case of an IoT company that deploys thousands of thermostats, motion detectors, cameras, and microphones for customers who manage multi-building properties such as hotels, casinos, and college campuses. Knowing that services might need to scale up quickly during customers' peak times, like a big event at a hotel or casino or the start of a college semester, the IoT company decided to build its platform for managing and monitoring devices using technologies native to its hyperscale public cloud platform.

The platform-native approach seemed logical at first: it was easy to deploy applications quickly on the cloud infrastructure. A year later, however, the hyperscale environment was needed only 20% of the time. For the remaining 80%, the IoT company essentially paid an "elasticity tax" on compute resources that sat idle during periods of low workload. In addition, hosting costs climbed rapidly as the infrastructure began to accumulate large data sets.
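The elasticity tax in that scenario is easy to quantify. A minimal sketch with made-up figures: if capacity is sized for the peak but needed only 20% of the time, the bill for the idle 80% is pure waste.

```python
# Hypothetical figures illustrating the "elasticity tax" described above.
peak_capacity_cost = 20000   # $/month to keep peak-sized capacity running
utilization = 0.20           # capacity actually needed 20% of the time

# The share of the bill paid for capacity that sits idle.
elasticity_tax = peak_capacity_cost * (1 - utilization)
print(f"idle spend: ${elasticity_tax:,.0f}/mo "
      f"({1 - utilization:.0%} of the bill)")
```

In this sketch, $16,000 of a $20,000 monthly bill pays for idle capacity, which is exactly the spend a hybrid, portable design tries to reclaim.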

This is a case where, once the initial workload stabilized, it would have made sense to move the baseline infrastructure to a virtual private cloud, where compute costs are lower. The IoT company could also have set up on-ramp interconnects to a hyperscale cloud platform to expand compute capacity cost-effectively when workload spikes occur. However, this approach depends on building the initial environment with application container management tools, such as Kubernetes, and provisioning tools, such as Ansible and Terraform, which are open source and can run in any cloud, on-premises, or colocation data center.
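One way to see why tools like Kubernetes enable this: the same workload definition runs unchanged on any conformant cluster, so "porting" largely becomes a matter of pointing at a different cluster. A minimal sketch that builds a provider-neutral Kubernetes Deployment manifest as plain data in Python (the application name and image are placeholder values, not from the case study):

```python
# Build a provider-neutral Kubernetes Deployment manifest as plain data.
# Nothing here references a specific cloud vendor, so the same manifest
# can be applied to a managed cloud cluster, a private cloud, or bare
# metal in a colocation facility. Name and image are placeholders.
import json

def deployment_manifest(name, image, replicas):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("iot-ingest", "registry.example.com/iot-ingest:1.0", 3)
print(json.dumps(manifest, indent=2))
```

Applying this to whichever environment currently wins on the cost, performance, and geography dials is then a matter of running `kubectl apply` against that cluster, with Terraform or Ansible provisioning the underlying infrastructure the same way everywhere.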

In this case, the IoT company relied on technologies native to its cloud platform provider. Porting such an environment to another cloud provider, or to a colocation or on-premises data center, would be costly. Had the company avoided the native approach, it could have easily ported its applications to any cloud environment and taken advantage of a hybrid approach, migrating workloads based on user demand.

The initial sacrifice generates long-term savings

The cloud is a great place for an application to be born. It's the fastest way to get the app in front of end users, and if workload demand skyrockets, you can easily scale compute resources to maintain a positive user experience.

However, as application workloads mature, stabilizing and accumulating large data sets and transaction volumes, cloud costs can rise quickly. This is when it often makes sense to move some of the infrastructure to a colocation data center or private cloud environment.

The key is to design for that portability now. When you evaluate IT infrastructure partners, make sure they offer the following today or have them on their product roadmap:

  • Industry-standard APIs built on open-source technologies, for mobility between cloud, on-premises, and colocation data center environments
  • Containerization, so that your virtualized infrastructure shares the same operating system
  • On-ramps to the cloud, to facilitate application portability
  • Security, to protect your data as it moves to and from the cloud

By resisting the temptation to develop applications with technologies native to a particular cloud platform, you may sacrifice some ease of deployment up front, but you can generate long-term savings while still giving your business the flexibility it needs, now and in the future.

Rob Lerner is a 22-year industry veteran who has held leadership roles in solutions engineering and sales, always with a customer-centric mindset. He is VP of Sales and Cloud Strategy at DataBank.
