Following many blogs, I ran into a real test of Photon and Docker. After many labs trying to answer the question «can you really work magic and transform my application into a cloud native one?», I realized that something has to change in how developers approach these new development methods. The shift from developer to DevOps is a great improvement over the traditional approach, but a new architectural method must be learned before trying to "dockerize" your application.
In the past, application tiering was the winning model for developers who wanted to work in a group: the architect draws the class diagram and the ER diagram, the DB developer turns the ER diagram into a database schema with its many relations, developers implement classes and libraries, designers work on the UI, and testers validate the result.
|Data Logic (or data tier)|
|Presentation Logic (or User Interface / Web Services)|
In the old ecosystem around the application, development implied developers and sysadmins working together through many steps, from development to production. Sysadmins still worry about this (me too).
From a vSysadmin perspective, a solution for this architecture could be implementing three or more VMs with HA.
A sort of evolution came with the introduction of synchronous and asynchronous calls between applications. In this scenario every macro-piece of an application can be called a service and, by adopting messaging protocols, a big application can be built, reused and modified with less impact than a traditional one. This approach takes the name of Service Oriented Architecture (SOA), and a director/orchestrator is a mandatory element that rules and checks the many workflows inside this big, distributed application.
|Service A||Service B||Orchestrator|
|Presentation /UI||Presentation /UI||Message broker|
|Business Logic||Business Logic||Business utilities (DNS, Monitoring)|
|Middle Logic||Middle Logic||Middle utilities|
|Data Logic||Data Logic||Data check and transaction tracker|
|Client||WS polling||Events and Timers|
|Server||Business/data||Objects, Fields, Records|
From a vSysadmin perspective, implementing this system could look the same as the tiering model, but the wide distribution could mean deploying VMs across multiple vDatacenters.
Mobile - The cloud era
Mobile development brought a small revolution for developers: the old UI, the web 2.0 style interface, is now implemented on a mobile phone. For designers, the old battle for cross-browser compatibility has turned into a full world war over mobile application compatibility.
But the real improvement is the introduction of microservices: small, thin applications that interact with databases and big data stores, exchanging native formats such as JSON over a common, old protocol: HTTP.
This development assumes that data logic is correctly separated from business logic, and that web services fit the UI or mobile UI. Big applications can be built simply by using that same HTTP protocol to link these little software pieces called microservices.
From the sysadmin, vSysadmin and cloud manager's perspective, application unavailability means your app is offline: in that case your mobile phone is about as useful as a paperweight or a decorative stone. For this reason, load balancing and always-on systems are mandatory. Cloud administrators should choose multiple vDatacenters with a stretched network or a site-to-site VPN.
A cloud native application
A cloud native application is the latest evolution in software development, where three fundamental elements build simple or complex applications that run on any infrastructure and are used from several devices.
|data||a simple or complex data repository, such as big data, a database or a filesystem|
|microservice||the running code|
|interface||a port or URL through which a caller can invoke the process|
But these simple elements can be encapsulated in a more complex structure called the cloud environment, which is formed by the following elements:
|Business element or cloud element||data, code and interface|
|Scheduler||an operational element that fits workloads onto the underlying or other business structures|
|Orchestrator||a governing element which combines business relations, federation and tenancy|
Following this model, a simple cloud element can be a starting or ending point for other cloud elements, and many cloud elements can be widely distributed across the world. For this reason these elements must be infrastructure independent: instantiated, destroyed, moved and replicated at any time, on any underlying environment.
This is a significant difference from the simple, wide hosting offered by cloud providers.
A developer who wants to run a cloud native application should choose one of these ways:
- wrapping a traditional application: less time to spend, but more integration and infrastructure work
- developing cloud code: more time spent in development, but great performance in a cost-controlled environment.
This second way doesn't exclude sysadmin activities, because the need for an OS and an adequate environment to run cloud native code must still be met with the "right" architecture. Photon OS and Docker are the right answer both for wrapping traditional apps and for running cloud native code.
Docker: microservice encapsulation
Microservices are relatively small-footprint pieces of code that execute one or more business operations and communicate with other microservices or datastores through their interfaces. But every microservice, or even a traditional application, can be containerized: enclosed in a secure jail and managed by one or more directors working on the underlying OS. This is Docker: a ship that handles, standardizes and moves containers in the OS.
Like on your mobile phone, security is granted by the jail technique, which, in normal conditions, isolates every containerized app. Every inter-app connection is handled by Docker. Between the human and the application there are many system elements that must be implemented, and at this level VMware rocks with a thin OS called Photon (small footprint, but great functionality inside and outside the OS).
One common mistake when dockerizing a new project is to fork a Hub image, enter the container, modify content and data, then save and export. My suggestion: before using containers, analyze what a container really is and follow these simple rules:
- prepare a container from a base or specialized Docker image
- use a Dockerfile and its syntax during the preparation
- "link" your data logic, which must be kept "separated" from the container itself.
Sure, data logic could be encapsulated and distributed inside a container, but using a traditional OS or a distributed file system to store data is a more reliable solution for backing up, moving and protecting data. This new method teaches developers that data must be handled outside the code, and accessed only through another Docker element that creates a secure bridge between the inside and outside worlds.
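The rules above can be sketched as follows. This is a minimal, hypothetical example: nginx:alpine is a real Docker Hub image, while the mysite tag, the site/ directory and the /srv/mysite/html path are placeholder names of my own.

```shell
# Rule 1 and 2: build from a base image using a Dockerfile,
# never by hand-modifying a running container and exporting it.
cat > Dockerfile <<'EOF'
# start from a base (or specialized) image
FROM nginx:alpine
# bake only the static application content into the image
COPY site/ /usr/share/nginx/html/
EOF

docker build -t mysite .

# Rule 3: keep the data logic outside the container,
# mounted as a volume at run time instead of saved inside the image.
docker run -d --name mysite -p 8080:80 \
  -v /srv/mysite/html:/usr/share/nginx/html:ro \
  mysite
```

With this layout the image stays disposable: you can destroy and recreate the container at will, because the data lives on the host (or on a distributed filesystem) and is only mounted in.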
Just an example with Drupal development:
- In the traditional way, running Drupal required an OS, Apache, PHP and MySQL, plus many add-ons to secure the OS, the application and the environment (never mind a clustered Drupal service)
- In the Docker way:
- Run an nginx image and pass it a basic configuration
- Run a MySQL container, passing username and password and executing the creation of the Drupal database (with grant options)
- Run the Drupal container with a link to the database and, optionally, Drupal autoconfiguration variables
- Optionally run an NFS server (as data repository), Git or FTP linked to the Drupal container, and mysqladmin linked to the MySQL container
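The Docker steps above could look roughly like this. mysql, drupal and nginx are real Docker Hub images and the MYSQL_* variables are the documented ones, but every name, password and config path here is a placeholder, and --link was the linking mechanism of that era (user-defined networks replace it today):

```shell
# MySQL container: the official image creates the database
# and grants the user from environment variables
docker run -d --name drupal-db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=drupal \
  -e MYSQL_USER=drupal \
  -e MYSQL_PASSWORD=changeme \
  mysql:5.7

# Drupal container, linked to the database
docker run -d --name drupal \
  --link drupal-db:mysql \
  drupal

# Optional nginx front end with a basic (hypothetical) configuration,
# linked to the Drupal container
docker run -d --name web \
  --link drupal:drupal \
  -p 80:80 \
  -v /srv/nginx/drupal.conf:/etc/nginx/conf.d/default.conf:ro \
  nginx
```

Each piece is now a separate, replaceable container: upgrading Drupal means replacing one container, not rebuilding the whole stack.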
Photon: OS for Docker
Photon could be the answer to the problem of bringing a standardized system to virtual infrastructure, cloud, or a local workstation. It has a small footprint and an easy installation procedure. From the VMware perspective it is a system building block with which you can deliver Docker on every infrastructure.
There are some examples, using Vagrant or manually with the ISO image, of installing and running Photon: Photon™ by VMware®
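A quick Vagrant-based sketch, assuming the box name "vmware/photon" (check the official Photon documentation for the current box name before relying on it):

```shell
# Fetch and boot a Photon VM through Vagrant
vagrant init vmware/photon
vagrant up
vagrant ssh

# Photon ships with Docker preinstalled; enable and start the daemon,
# then verify it with a throwaway container
sudo systemctl enable docker
sudo systemctl start docker
sudo docker run --rm hello-world
```

The same result can be reached manually by installing Photon from the ISO image on a VM or workstation; the systemctl steps are identical once the OS is up.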