
Private clouds with De Novo

Cloud infrastructure is a complex built from pools of computing and data storage resources, network infrastructure, and management and monitoring components. It provides the operating environment for the services hosted on the platform and defines their key parameters (performance, scalability, fault tolerance).

The idea behind the modern cloud infrastructure concept is that problem resolution, continuity support and high availability of services are transferred from the application level to the level of cloud infrastructure resources (resource pools), so that services inherit the characteristics of the resource pools on which they run. For example, if a particular service or a set of diverse services must be made disaster tolerant, we place them on resources protected by the ‘Red Button’ product, eliminating the need to build expensive and complex cluster systems at the application level. Disaster tolerance is then verified in a single step, at the level of the disaster-tolerant resource pool of the cloud infrastructure.

Scalability is resolved in a similar way: if infrastructure resources need to be expanded (for example, because of load growth or the introduction of new applications), we add only those resources that are actually in short supply. Load redistribution becomes the task of the cloud infrastructure and is performed automatically, without interruption or downtime of the services running on its platform.
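
As a rough illustration of this resource-pool idea, here is a minimal Python sketch with hypothetical names and figures (not De Novo's actual tooling): making a service disaster tolerant becomes a matter of where it is placed, and scaling means topping up only the scarce resource.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A pool of cloud resources with uniform characteristics (illustrative model)."""
    name: str
    vcpu: int                        # vCPUs available in the pool
    ram_gb: int                      # RAM available in the pool
    disaster_tolerant: bool = False
    services: list = field(default_factory=list)

    def place(self, service: str, vcpu: int, ram_gb: int) -> None:
        """Place a service in the pool; it inherits the pool's properties."""
        if vcpu > self.vcpu or ram_gb > self.ram_gb:
            raise RuntimeError(f"pool '{self.name}' lacks capacity for {service}")
        self.vcpu -= vcpu
        self.ram_gb -= ram_gb
        self.services.append(service)

    def expand(self, vcpu: int = 0, ram_gb: int = 0) -> None:
        """Add only the resources that are in short supply; running services are untouched."""
        self.vcpu += vcpu
        self.ram_gb += ram_gb

# Two pools: an ordinary one and one protected by a disaster-tolerance mechanism
# (in De Novo's case, the 'Red Button' product).
general = ResourcePool("general", vcpu=64, ram_gb=256)
protected = ResourcePool("disaster-tolerant", vcpu=32, ram_gb=128, disaster_tolerant=True)

# Disaster tolerance is achieved by placement, not by application-level clustering.
protected.place("billing", vcpu=8, ram_gb=32)
general.place("intranet-portal", vcpu=4, ram_gb=16)

# Scaling: add only the scarce resource (RAM here) without touching the services.
general.expand(ram_gb=128)
print(protected.services, protected.disaster_tolerant)   # ['billing'] True
```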

Consolidation and virtualization of server infrastructure

According to statistics, the majority of financial enterprises do not use all the resources available to them. In spite of significant spending on business software, these companies use only about 10% of their resources, yet quickly run into a shortage of system capacity (newly purchased software requires an operating system upgrade). Consolidation and virtualization of the operating infrastructure provide a solution to this issue and the means for continued improvement of IT systems.

The objective is to reject the traditional approach of assigning a separate hardware platform (server) to each specific task and to adopt the resource approach instead. Available facilities are grouped by function into integrated resource pools and then, using virtualization, divided between services on demand. As a result, the company gains mobility: operating systems (and the business applications running on them) become independent of the hardware platform, and the capacity assigned to a given task can be modified flexibly. Besides optimizing the application architecture, this approach ensures failure-resistant operation of services: an operating environment (application programs, services) inherits the features of the resource pool in which it is placed. Virtualization at the operating system level resolves the fault-tolerance issue and avoids logical data corruption.

Since there is no need to implement clustering at the level of application programs, the logical architecture of the information system is significantly simplified.
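
To make the consolidation argument concrete, here is a small, purely illustrative calculation with assumed figures (not customer data): dedicated servers running at roughly 10% utilization can be consolidated into a much smaller shared pool and carved into virtual servers on demand.

```python
# Illustrative consolidation arithmetic (assumed figures, not a sizing tool).
dedicated_servers = 20          # one physical server per task, traditional approach
cores_per_server = 16
avg_utilization = 0.10          # ~10% of capacity actually used (see text above)

# Capacity actually consumed across the whole estate:
used_cores = dedicated_servers * cores_per_server * avg_utilization   # 32 cores

# Consolidated pool sized with headroom for peaks and failover:
headroom = 2.5
pool_cores = int(used_cores * headroom)                               # 80 cores

print(f"Dedicated estate: {dedicated_servers * cores_per_server} cores, "
      f"{used_cores:.0f} actually in use")
print(f"Consolidated pool: {pool_cores} cores can host the same workload with room to grow")
```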

The solution package includes the following:

  1. Consolidation of server resources and data storage systems into integrated resource pools
  2. Creation of the required number of virtual servers for application hosting on the basis of the resource pools
  3. Implementation of failure-resistant and multipurpose resource management methods, and dramatic simplification of the IT infrastructure
  4. Development of the design and operational documentation package
  5. Staff training

The first step in project implementation is clarification of the business requirements: investigating resource needs, defining the specific technologies to be implemented, and designing a target infrastructure that meets all functional and non-functional requirements.

Desktop virtualization

What does a workplace look like in an average company? Typically, the company operates a number of personal computers, each with its own local resources. From time to time the company faces workplace malfunctions: hardware breakdowns and software failures. Fixing them consumes IT service resources, and the company itself incurs losses from unproductive downtime. The more developed the infrastructure, the harder and more expensive its maintenance becomes.

Another complication of the classical workplace infrastructure lies in supporting business growth. Opening new branches or offices and increasing staff require the creation of new workplaces (purchasing the same computers and software) and solving the problems of access to shared resources, business applications and services. The traditional approach leads not only to expensive new local computing infrastructures and the complexity of maintaining them, but also to further growth of the operating staff.

In addition to the factors mentioned above, the bottlenecks of the classical approach are:

  • The complexity of ensuring data safety and protection;
  • Labour-intensive introduction of changes and the high cost of modernization;
  • The need for regular infrastructure upgrades due to functional depreciation and obsolescence.

To solve these problems and eliminate this complexity, we propose changing the approach to building a sustainable workplace: move the operational tasks to a consolidated and secure Data Center, and replace full-featured workstations with impersonal devices for entering and displaying information (thin or zero clients). These devices do not get out of order while in service, require no maintenance and do not contain any business-critical information.

First, virtualization of desktops and applications substantially reduces the cost of the infrastructure and simplifies support. The risk of hardware failure is reduced, and there is no longer any need to upgrade workplaces when moving to a new version of the operating system or updating software. The number of objects to support shrinks: several application templates and virtual desktops are created for the different types of users, and their subsequent maintenance is carried out automatically, in accordance with group policies. When changes are necessary, you simply introduce them into those same few application templates and virtual desktops, rather than at each workplace.
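
The "few templates instead of many workplaces" idea can be sketched as follows (a hypothetical Python model; the names and numbers are assumptions, not a description of a specific VDI product): user types are mapped to a handful of desktop templates, so a change is applied once per template rather than once per workplace.

```python
from dataclasses import dataclass

@dataclass
class DesktopTemplate:
    """A shared golden image for all users of one type (hypothetical model)."""
    name: str
    applications: list

# Three templates cover the whole organisation instead of hundreds of individual PCs.
templates = {
    "office":   DesktopTemplate("office",   ["mail", "office-suite"]),
    "finance":  DesktopTemplate("finance",  ["mail", "office-suite", "erp-client"]),
    "designer": DesktopTemplate("designer", ["mail", "cad"]),
}

# 300 users, each assigned a template by role; group-policy-style maintenance means
# an update touches a few templates, not 300 machines.
users = {f"user{i:03d}": ("finance" if i % 10 == 0 else "office") for i in range(300)}

def add_application(template_name: str, app: str) -> None:
    """A change is introduced once, in the template; every assigned desktop inherits it."""
    templates[template_name].applications.append(app)

add_application("finance", "reporting-tool")
print(len(users), "desktops updated via a single template change")
```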

Second, workplace virtualization provides a high level of security: the workplace simply does not contain any information that could be stolen. All processed data is consolidated in the Data Center, which is much easier to protect.

Third, the time-consuming creation of new workplaces and provision of access to new applications is resolved: only devices for input and display of information need to be bought, and a virtualized workspace with all applications is created automatically when needed.
In addition, this solution significantly extends the operating life of old PCs: there is no need to replace them when moving to new software versions, and modern terminal devices will displace them through natural rotation as they fail.

De Novo experts suggest:

  • Concentration of information storage and processing in a single Data Center, abandoning full-featured local IT infrastructures at branches and offices;
  • Implementation of effective tools for remote access to centralized IT services for business users across the regional network;
  • Replacement of full-featured PCs at business users’ workplaces with specialized devices for input and display of information (‘thin clients’);
  • Creation of a workplace management system to help organizations optimize their workplace infrastructure;
  • Employee training tailored to current administration needs, and preparation of a full set of operational documentation.

While implementing these projects, we adhere to our principles of a comprehensive approach and devote every effort to meeting customer needs. Together with the client, we select precisely those technologies that achieve the main goal: an effective infrastructure of virtual workplaces. It allows business users to perform their everyday tasks while meaningfully decreasing maintenance costs, once the issues of data protection have been solved and unproductive downtime has been minimized.

Virtualization of business critical applications

Availability, continuity and rapid recovery are crucial parameters for business-critical services (applications), and they require special approaches during virtualization. We meet these requirements on the basis of the VMware BCA platform. De Novo is the only company in the CIS that meets VMware’s qualification criteria in this competence.

De Novo’s expertise ensures effective virtualization of business-critical applications such as Oracle, SAP, MS SQL and others. This unique competence provides:

  • Performance comparable to that delivered by RISC systems
  • Implementation of security policies for business-critical applications in a virtual environment
  • Safe sharing of resources through separation at the level of networks, computing resources and storage

Virtualization of storage systems

A peculiarity of the data storage systems (DSS) used in most large Ukrainian companies is the island-like approach to their implementation. The main advantage of this approach is the ability to build a separate disk storage for every single system or application, hosting the application itself and the data required for its operation. The main drawback is that, over time, some of the ‘islands’ run short of disk resources and need to be upgraded, while other ‘islands’ still have plenty of disk space to spare. These resources cannot simply be relocated, since they reside on physically separate disk arrays (possibly from different manufacturers).

We therefore recommend that our customers apply an approach based on functionally integrated resource groups. Within such a resource group, the tasks of failure recovery, scalability and manageability can be fulfilled at the same time and with maximum effectiveness. Centralization of the storage system is the pillar of the resource approach: with centralized control, uninterrupted scaling and the elimination of non-redundant points of failure are achieved most effectively by applying centralized data storage technology.

The task of data storage centralization can be fulfilled in two ways. The first is to completely replace the existing systems with a new one, which requires sizeable investment but ensures improved failure recovery and efficiency. The second is to virtualize the company’s existing disk arrays. In either case, to servers and user equipment the resulting system appears as one large disk array.
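
The difference between isolated ‘islands’ and a single virtualized pool can be shown with a small illustrative calculation (the capacities are assumed): free space stranded on one array becomes usable by any system once the arrays are presented as one large virtual disk array.

```python
# Assumed capacities for three storage 'islands', in TB: (total, used).
islands = {"erp-array": (20, 19), "mail-array": (15, 6), "crm-array": (10, 4)}

# Island approach: each system can only use the free space on its own array.
for name, (total, used) in islands.items():
    print(f"{name}: {total - used} TB free, usable only by its own system")

# Virtualized pool: servers see one large disk array, so all free space is shared.
pool_free = sum(total - used for total, used in islands.values())
print(f"virtualized pool: {pool_free} TB free, available to any system")

# The ERP 'island' above is nearly full and would need an upgrade on its own,
# even though the estate as a whole still has plenty of spare capacity.
```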

Data in such a storage system can be allocated among several systems, applications or servers, which significantly simplifies backup and data protection. As a result, the client saves money by purchasing less expensive equipment, cuts support costs, and gains improved system failure and disaster recovery.

At the survey stage (the initial stage of every project) the main objective is to define the premises and requirements of the target system. De Novo experts investigate the amount of data stored in the company, the application requirements for storage performance, and the company’s IT infrastructure requirements (monitoring, access to the data warehouse, etc.). Disaster recovery requirements are also discovered and specified at this stage (for instance, whether a synchronous backup center has to be provided), which influences the facilities chosen and the overall approach.

After project implementation, our customer obtains a data storage system equipped with a tool for proactive response to a lack of disk resources, enabling the IT department to meet business needs. The resource approach can extend beyond data storage: computing and data storage resources within the IT infrastructure can be organized in the same way, which simplifies their further scaling.
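
As an illustration of what proactive response to shrinking disk resources might look like (hypothetical thresholds and figures, not De Novo’s monitoring tooling), a simple capacity check could flag pools before they run out:

```python
def check_capacity(pools: dict[str, tuple[float, float]], warn_at: float = 0.8) -> list[str]:
    """Return pools whose used/total ratio exceeds the warning threshold."""
    alerts = []
    for name, (used_tb, total_tb) in pools.items():
        if used_tb / total_tb >= warn_at:
            alerts.append(f"{name}: {used_tb:.0f}/{total_tb:.0f} TB used, plan expansion")
    return alerts

# Example run with assumed figures.
print(check_capacity({"tier1-pool": (42, 50), "archive-pool": (30, 100)}))
```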
