Technology Architecture For Enterprise

Introduction

The vision of the enterprise is to provide a service with a minimum level of human intervention; the choice and use of technologies therefore drives the level and range of services that can be offered.

Technology drivers

Cloud

The core product of the enterprise will be a software service, and as the enterprise is not going to host / run a data center, the software service must be hosted / run on public cloud infrastructure (such as Amazon Web Services). With the major part of the cost associated with the cloud infrastructure, it is essential that the service is not tightly tied to a particular cloud provider, giving the opportunity to change cloud provider should a cheaper option appear. As cloud data centers are lease based, all systems (AccountIT as well as the business systems) must be horizontally scalable ("elastic scaling"), so that the systems can also "scale down", i.e. reduce the use of "HW" resources like CPU, memory, hard disk and network. Scaling must be automatic in order to provide a "24 / 7" illusion to the user while keeping the cost of leasing the "HW" resources to a minimum. "HW" in this context is virtual, as it is provided by a "Cloud provider" like Amazon Web Services.
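
A minimal sketch of what the automatic scaling could look like on Amazon Web Services, using the boto3 Python library. The group name, launch configuration, region and CPU target are illustrative assumptions, not decisions made by the enterprise.

  # Sketch only: elastic scaling with AWS Auto Scaling via boto3.
  # All names and numbers below are placeholders.
  import boto3

  autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

  # A group that can shrink to one node when load is low and grow to ten
  # nodes at peak -- the "scale down" requirement in practice.
  autoscaling.create_auto_scaling_group(
      AutoScalingGroupName="accountit-app",        # hypothetical group name
      LaunchConfigurationName="accountit-app-lc",  # hypothetical launch configuration
      MinSize=1,
      MaxSize=10,
      DesiredCapacity=2,
      AvailabilityZones=["eu-west-1a", "eu-west-1b"],
  )

  # Target tracking keeps average CPU around 50%, adding and removing
  # instances automatically so the leased "HW" follows the actual load.
  autoscaling.put_scaling_policy(
      AutoScalingGroupName="accountit-app",
      PolicyName="accountit-app-cpu-50",
      PolicyType="TargetTrackingScaling",
      TargetTrackingConfiguration={
          "PredefinedMetricSpecification": {
              "PredefinedMetricType": "ASGAverageCPUUtilization"
          },
          "TargetValue": 50.0,
      },
  )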

Open Source

Likewise the software license for 3rd party frameworks and libraries must have a cost structure that is easy to maintain and allows for dynamic growth (in connection with sudden demand). The ideal is Open Source framework / library code with the freedom to use it in our products without the need to "open source" them. But proprietary software is also an option, as long as the license scheme allows use in elastic deployments, i.e. cloud based with no fixed scaling limits (such as number of cores, nodes or the like).

Guiding Principles

The main guiding principle for choosing technologies is that:

We want to have as few technologies as possible, yet for each problem we want to have the best possible technology.

  • We want few technologies, because the fewer we have, the easier they are to master. At the same time we want the best technologies, because they solve our problems in the most efficient way.
  • We should never have two different technologies for solving the same problem.

Technologies

UI Frameworks

The main driver for the user interface is its usage requirements, which have one goal - "easy to use" - but two perspectives:

  • "Easy to use" for development, to support a "fast time to market" approach
  • "Easy to use" for users, who are presented with an interface that is "well known" (i.e. using UI controls that are common on the web).

The highest priority is "user friendly". The reason for this is:

  • We are going with the "self-service" concept, so customers must be able to use the system without the need for support; an essential component of this is that the system works "as one would expect" (which may not be as simple as it sounds!)

There will be a number of different UIs, so the framework must support a wide range of presentation and control tools. Some of the UIs within the enterprise will be:

  • The primary product (AccountIT); an accounting system which many people from the customer organization will interact with.
    • The accounting staff; providing input to the books.
    • Management; viewing statistics
    • Accountants; auditing and editing the books
    • System administrators; granting users access
  • The enterprise "ERP / CRM" system; a system containing all the customers (the CRM part), i.e. those who are using AccountIT, and managing the customer life-cycle, i.e. acquiring new customers, billing them, and "cleaning up" when customers leave.
    • Service staff; assisting customers with issues
    • Billing staff; collecting usage statistics, producing bills, collecting money, and updating the books to ensure all outstandings are handled, i.e. the enterprise's own use of AccountIT

Messaging

The use of messaging provides an opportunity to build an asynchronous system, where sub-systems can interact without depending on the availability of each other. This makes the system more robust and easier to maintain, as dependencies are removed: all interacting sub-systems only have a dependency on the messaging sub-system. This makes the choice of the messaging system essential, as it must be fault tolerant (so we have a virtually "24 / 7" messaging system) and flexible (able to connect a wide range of sub-systems). Use of messaging requires the "acceptance" of the CQRS pattern due to the CAP theorem; as a consequence most "UI" activities will not be completed on demand, but rather in the background, and the completion of the task will be reported back through some means (e.g. a notification).
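
A minimal sketch of the asynchronous flow described above, using a plain in-process queue as a stand-in for the real messaging sub-system; the command and helper names are illustrative assumptions.

  # Sketch: a UI request is accepted immediately, processed in the background,
  # and the completion is reported back through a notification channel.
  import queue
  import threading

  command_queue = queue.Queue()

  def handle_ui_request(user, entry):
      """Called by the UI: accept the request and return immediately."""
      command_queue.put({"type": "PostEntryCommand", "user": user, "entry": entry})
      return "accepted"  # the UI shows "request received", not the final result

  def notify_user(user, message):
      # Stand-in for the notification channel (e.g. a message in the user's inbox).
      print(f"notify {user}: {message}")

  def worker():
      """Background consumer: completes the work and reports back."""
      while True:
          command = command_queue.get()
          # ... update the books / write model here ...
          notify_user(command["user"], f"{command['type']} completed")
          command_queue.task_done()

  threading.Thread(target=worker, daemon=True).start()

  handle_ui_request("alice", {"account": "1000", "amount": 125.0})
  command_queue.join()  # only for the example; a real UI would not wait here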

In memory data-grid

The general pattern for handling user requests is asynchronous: on a request the user is given a notice that the request has been received, and upon completion the user is notified via some channel (e.g. a message in the user's inbox). In addition, a caching mechanism will be used to reduce look-ups against the persistent data-stores. This is especially useful in cases where the data in the cache is almost static (e.g. the user profile - phone no., email address, etc.). Caching can with benefit be implemented using an "in-memory data-grid": a clustered caching solution that to some degree also supports persisting the data, which in turn supports a dynamic node life-cycle within the in-memory data-grid cluster.
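
One common way to realize this is the cache-aside pattern; a minimal sketch follows, with the data-grid client faked by a dict and load_profile_from_datastore as an assumed helper, not an existing API.

  # Sketch: nearly-static profile data is served from the cache, and the
  # persistent data-store is only hit on a cache miss.
  profile_cache = {}  # stand-in for a clustered in-memory data-grid

  def load_profile_from_datastore(user_id):
      # Placeholder for the expensive look-up in the persistent data-store.
      return {"user_id": user_id, "phone": "...", "email": "..."}

  def get_profile(user_id):
      profile = profile_cache.get(user_id)
      if profile is None:
          profile = load_profile_from_datastore(user_id)
          profile_cache[user_id] = profile  # a real grid would replicate / persist this entry
      return profile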

Datastore

Data persistence is essential to any system, as all systems need to store state. The AccountIT product must store the customers' books, while the business systems must store customer account information as well as e.g. usage information for billing. Due to its central role within all systems, the data-store usually ends up being the bottleneck as well as the "weakest link" in the system complex. With a cloud setup we require that data storage has the same performance and scalability qualities as the computational parts of the system. This requirement forces us to move away from the "standard RDBMS" solution, as these are "single instance" solutions with a focus on specialized HW to provide performance. NoSQL is the current response to these RDBMS short-comings: the NoSQL model gives up the advanced query capabilities provided by SQL in exchange for distributed storage capabilities, enabling simple cluster support for data storage. The latest within data-store technologies are NewSQL data-stores, which try to provide the best of both RDBMS (i.e. query capabilities) and NoSQL (the distributed data-store). We may need to mix the two, or even use an RDBMS for e.g. BI solutions where response time is less of an issue, but the capability to analyze data from many perspectives is essential.
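
An illustrative sketch of the trade-off: NoSQL-style stores scale out easily because access is by key, so each key can be routed to a node with a simple hash, but cross-cutting queries then have to touch every node. The node names and key layout are assumptions for the example.

  # Sketch: simple hash partitioning of keys across storage nodes.
  import hashlib

  NODES = ["store-node-1", "store-node-2", "store-node-3"]

  def node_for_key(key):
      """Deterministically map a key to one of the storage nodes."""
      digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
      return NODES[digest % len(NODES)]

  # Each customer's books can live on (and scale with) a different node ...
  print(node_for_key("customer-42/ledger/2014-05"))

  # ... but a question like "total revenue per month across all customers"
  # now means querying every node and aggregating -- the query capability
  # that was traded away, and a reason to keep an RDBMS for BI.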

Operating System

From the cost reduction requirement we will go with a Linux OS, but within the Linux option a specific distribution still has to be selected.

Monitoring

With all "components" (messaging, UI, data-stores, etc.) consisting of clustered instances, monitoring will need to focus on the state of the cluster, i.e. how the total capacity of the cluster is doing. We will not focus on the individual instances; instances joining and leaving the cluster will not be "noticed" by the monitoring other than as a change in capacity. It is therefore essential that any selected technology supporting clustering provides an API giving access to cluster performance statistics, so that aggregates can be generated which a monitoring system can use for alerts and notifications. Finally, as the individual instances are not monitored, some self-check mechanism is needed to ensure that "non-productive" instances are shut down to avoid "resource leakage", i.e. instances using cloud resources (like CPU, memory and disk) without providing any benefit to the running of the systems.
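
A sketch of what the cluster-level view and the self-check could look like; the statistics layout and the idle thresholds are assumptions, not an existing API.

  # Sketch: aggregate per-instance statistics into cluster capacity figures,
  # and flag "non-productive" instances as candidates for shutdown.
  def aggregate_capacity(instance_stats):
      """instance_stats: list of {'id': str, 'cpu_pct': float, 'requests_per_s': float}."""
      count = len(instance_stats)
      return {
          "instances": count,
          "avg_cpu_pct": sum(s["cpu_pct"] for s in instance_stats) / max(count, 1),
          "total_requests_per_s": sum(s["requests_per_s"] for s in instance_stats),
      }

  def non_productive(instance_stats, idle_cpu_pct=2.0, idle_requests_per_s=0.1):
      """Self-check: instances leasing resources but doing no useful work."""
      return [s["id"] for s in instance_stats
              if s["cpu_pct"] < idle_cpu_pct and s["requests_per_s"] < idle_requests_per_s]

  stats = [
      {"id": "node-a", "cpu_pct": 35.0, "requests_per_s": 120.0},
      {"id": "node-b", "cpu_pct": 0.5, "requests_per_s": 0.0},  # candidate for shutdown
  ]
  print(aggregate_capacity(stats))
  print(non_productive(stats))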

Deployment

With cloud hosting the deployment has to be automated, as spawning a new instance and deploying the system is done via an API. There are two distinct deployment strategies:

  • deployment via an agent that configures the particular instance. Puppet is an example of this: an image with a bare-bone OS and the Puppet agent is deployed; when the Puppet agent starts it looks up the Puppet master and requests its node configuration. The agent then periodically checks its configuration with the Puppet master and updates the node setup accordingly.
  • deployment of an image containing the complete system. Packer is an example of this: an image is "baked" with all the required components, and the image is spawned when a node is requested from the cloud system.

Each of the two has pros and cons, and the choice will reflect how these are prioritized.
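
A minimal sketch of the image-based strategy: an image "baked" by Packer is started through the cloud API when a new node is needed, here with boto3. The image id, instance type and key name are placeholders.

  # Sketch: spawn a node from a pre-baked image via the cloud API.
  import boto3

  ec2 = boto3.client("ec2", region_name="eu-west-1")

  response = ec2.run_instances(
      ImageId="ami-00000000",     # id of the image baked with the complete system
      InstanceType="m3.medium",   # assumed instance size
      MinCount=1,
      MaxCount=1,
      KeyName="accountit-ops",    # hypothetical key pair
  )
  print(response["Instances"][0]["InstanceId"])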

Child pages