How to modernize the mainframe in a fast-changing banking industry

On one hand, our clients face changing regulations and competition from new kinds of banks entering the market. Keeping up means we are constantly moving forward: delivering new applications, improving the customer experience and rapidly extending new functionality to online and mobile channels.

On the other hand, we must maintain our clients' core banking systems with continuous availability and impeccable security. These COBOL-based systems have been around for decades and are still critical to bank operations.

The gap between these two worlds is widening as COBOL programmers retire and are replaced by a new generation of developers. So we decided to build a bridge: one that keeps the two worlds in sync, even as the banks evolve.

Simplifying layers of complexity

We have long maintained a multi-tier hybrid architecture that connects front-end, distributed systems with back-end IBM z/OS platforms using IBM Information Management System (IMS) software. This lets us build new Java-based applications that can access data and transactions from core banking systems.

However, this architecture can become quite complex, with an entire layer of compensation logic needed to mediate transactions. That complexity slows us down as we bring new applications and services to market.

Our challenge was to simplify the architecture. We needed modern, reusable software components with portability across both distributed and z/OS environments, but we also wanted to protect our investment in existing business logic.

A groundbreaking approach to mainframe modernization

The solution was to bring Java into the IMS production environments. We worked with IBM to create a common runtime environment inside IMS, making Java and COBOL interoperable. In fact, we were the first company in the world to do this, and we generate around 180,000,000 IMS transactions a day. Now this technology is available to everyone.

We now have a simpler architecture, with tighter integration between core banking systems and new, distributed applications. This gives us greater flexibility and makes development easier, so we can help our clients take innovative services to market faster, from web and mobile applications to ATMs and beyond.

Eventually, our front-end developers won't know whether they are calling Java services or IMS transactions, because everything will be accessed in one consistent way. Today, more than 80 percent of our total workload is Java-enabled. A Java-enabled IMS transaction can call Java programs running in the IMS Java JVM. In this way, IMS transactions can use the huge number of Java standard libraries and third-party software products. It also lets us code new business logic used in an IMS transaction in Java instead of COBOL, and migrate COBOL programs to Java.
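
As a rough illustration of what this makes possible, here is a minimal sketch of business logic written in Java rather than COBOL, reading core banking data over JDBC. It is only a sketch under assumptions: the connection URL follows the IMS Universal JDBC driver convention, and the PSB name (CUSTPSB), table and column names are hypothetical rather than taken from our systems.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CustomerBalanceLookup {

        // Type-4 style connection to IMS over TCP/IP; host, port and PSB name
        // are placeholders for illustration only.
        private static final String IMS_URL =
                "jdbc:ims://zoshost.example.com:5555/CUSTPSB";

        public static double lookupBalance(String customerId) throws Exception {
            try (Connection conn = DriverManager.getConnection(IMS_URL, "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT BALANCE FROM ACCOUNT WHERE CUSTOMER_ID = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getDouble("BALANCE") : 0.0;
                }
            }
        }
    }

Because the same class runs in any standard JVM, the same logic can be exercised in distributed environments and inside the IMS Java regions, which is what makes a gradual COBOL-to-Java migration practical.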

It's a gradual process, and we will eventually reach 95 percent of the total workload being Java-enabled.

This mix of Java and COBOL in the IMS environment is a good way to modernize the core applications on the mainframe step by step. We're bridging the gap between old and new, helping our clients use their existing investments to enable what's next in banking.

More details: Mainframe

Mainframe Security: Debunking Myths and Facing Truths

Today's mainframe is well positioned to support ever-evolving digital business environments; however, one important piece of the puzzle can sometimes be overlooked: security. IT leaders must confront certain myths about mainframe security in order to establish the right security posture for today's digital transformation. Can you answer the following questions confidently? How secure is my mainframe? Would you know if you had security vulnerabilities? Would we pass a compliance audit?

The idea that mainframes are invulnerable rests largely on myth and urban legend. In fact, mainframes can make attractive prey for hackers and those with malicious intent because:

  • Mainframes have IP addresses and are exposed to classic cyber threats
  • Enterprises often don't know they have been hacked because they are not getting real-time alerts
  • Hackers are very good at covering their tracks, and their presence can go undetected for long periods
  • Stolen passwords provide easy access to your most business-critical data
  • Defending against hackers injecting malware is extremely difficult

There are many myths about mainframe security, but in reality, solid, proven methods exist to secure your mainframe. I hope the following helps you separate fact from fiction.

The Myths – Mainframes Are Not Hackable and Are Not at Risk

Mainframes sometimes fall off the enterprise security radar, and IT security professionals assume they are not at real risk because of the inherent security built into the systems, but that is a misconception.

IT complexity adds to the risk of security gaps. The fact is we deal with many IT worlds and multi-faceted environments with vastly different operating systems and programs. They speak different languages, and fewer and fewer professionals are fluent in all of them. Since it takes roughly 200 days to detect a breach, mainframe professionals must be even more vigilant, even as it gets harder to do so.

Data breach. Two chilling words no security professional ever wants to hear. Unfortunately, mainframe data may be among the most vulnerable, given the sensitive and attractive nature of the information. Consider stock trades in finance, or money transfers in manufacturing or government. The money trails and IP addresses lead to mainframes, so enterprise security postures must include that platform. The odds of a future data breach have increased, and average total costs have grown to about $4 million per incident. It isn't just hard to recover from financial losses like that; it can be painfully hard to recover from a tarnished brand.

The Truth – The Mainframe is the Most Securable Platform

It is more important now than ever to fortify the critical business data that lives on your mainframe. With the right tools, IT security professionals can capture operational data and build useful 360-degree views of the mainframe that alert in real time. You can identify and correct risky conditions and demonstrate compliance with both national and international security regulations and compliance mandates. Look for out-of-the-box capabilities and audit scorecards that help you meet requirements set forth by PCI DSS, HIPAA, GDPR and other standards.

Organizations want and need to secure their data; however, achieving that goal can be daunting for stretched IT teams. Fortunately, technology and services have risen to the occasion. For the mainframe to remain viable, it must be simple and straightforward for IT to maintain and enhance innovative technology, and that includes automation, correlation and security. At Maintec, we have taken steps to help our mainframe customers strengthen their security posture with our AMI for Security offerings and our built-in, industry-leading mainframe SIEM technology.

Now that you know more about the myths and the facts, you have the knowledge to debunk mainframe security myths and embrace the truths with a mature security posture. It is essential that you take the right steps and plan for your enterprise-wide security. Maintec is here to help. Find out more about Maintec Automated Mainframe Intelligence (AMI) here. More details: Mainframe outsourcing

Disaster Recovery – The SAVIOR of your Business

Disaster recovery by Maintec

What would happen to your life or your business if suddenly, and unexpectedly, you lost all of your valuable computer data? What if you had no contact information for your clients or customers, no records of your business transactions, no financial records, no documents, no forms? Would your business be able to operate, and if so, how long would you realistically be down before you were back to business as usual?

Despite the importance of our computers and the valuable data they hold for us, the reality is that most organizations and individuals don't have ANY backup system or plan in place! Logically, we all understand the importance of having a computer backup system, yet the majority of us have inadequate backup systems or, worse, no computer backup system at all!

It's Not a Question of If It Will Happen, It's a Question of When

Far too many organizations make the mistake of thinking a disaster can't happen to them, and many have already paid the price for this attitude. They don't see disaster recovery plans as a necessity. They would rather occupy themselves with the "real" issues they face now, not with some "what if" scenario that may never happen at all. And it's understandable, too.

The attitude that an adequate disaster recovery plan is something your business can put off is truly a recipe for disaster. In the world of computers, disasters can (and will) happen. It's not a question of "if"; it is a question of "when". Will you be prepared?

If your business depends on data, you need a data center disaster recovery plan. Set up your data center disaster recovery plan today! Studies have shown that many businesses fail after experiencing a significant data loss, but DR can help.

Recovery point objective (RPO) and recovery time objective (RTO) are two important measurements in disaster recovery and downtime.

RPO is the maximum age of files that an organization must recover from backup storage for normal operations to resume after a disaster. The recovery point objective determines the minimum frequency of backups. For example, if an organization has an RPO of four hours, the system must back up at least every four hours.

RTO is the maximum amount of time, following a disaster, for an organization to recover data from backup storage and resume normal operations. In other words, the recovery time objective is the maximum amount of downtime an organization can handle. If an organization has an RTO of two hours, it cannot be down for longer than that.
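
To make the relationship between these two objectives and a backup schedule concrete, the short Java sketch below checks a hypothetical schedule against the four-hour RPO and two-hour RTO used in the examples above. The class and the figures for the backup interval and restore time are illustrative only, not part of any Maintec service.

    import java.time.Duration;

    public class RecoveryObjectives {

        public static void main(String[] args) {
            Duration rpo = Duration.ofHours(4);   // maximum tolerable data loss
            Duration rto = Duration.ofHours(2);   // maximum tolerable downtime

            Duration backupInterval = Duration.ofHours(6);      // hypothetical schedule
            Duration estimatedRestore = Duration.ofMinutes(90); // hypothetical restore time

            // RPO sets the minimum backup frequency: the gap between backups
            // must not exceed the RPO, or a disaster could lose more data than
            // the business can tolerate.
            boolean rpoMet = backupInterval.compareTo(rpo) <= 0;

            // RTO caps recovery: restoring from backup and resuming normal
            // operations must fit within the allowed downtime window.
            boolean rtoMet = estimatedRestore.compareTo(rto) <= 0;

            System.out.println("RPO met: " + rpoMet);
            System.out.println("RTO met: " + rtoMet);
        }
    }

With these numbers the RPO check fails (a six-hour backup interval exceeds the four-hour objective) while the RTO check passes, which is exactly the kind of gap a disaster recovery plan review is meant to surface.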

Services offered for Disaster Recovery on IBM i:

Constant Replication (HOT Recovery)

For all mission-critical IBM i applications with stringent RTO and RPO requirements, we recommend a High Availability (HA) setup with constant replication.

We at Maintec provide DR sites where your data is constantly replicated to our DR site using replication software.

In the event of a disaster, the HA/DR instance takes over with minimal outage. The outage in this case ranges from a few minutes to an hour.

Online Backups (WARM Recovery)

Maintec Online Backup solution for IBM i (AS400, iSeries, i5, System i) involves a save of your IBM i Production server to an on-premise Virtual Tape Drive at our data center.

The initial save is followed by a periodic daily save of changes to a data vault in the Maintec data center.

In the event of a disaster, we restore the data from the vault to an IBM i server. The outage in this case ranges from 6 to 24 hours.

Tape Recovery (COLD Recovery)

Maintec Tape Recovery solution for IBM i (AS400, iSeries, i5, System i) involves saving the Customer’s data in a fireproof vault at our data center.

On a periodic basis, customers send their complete system save backup tapes to the Maintec data center for storage and recovery purposes.

In the event of a disaster, we use the backup tape to rebuild your LPAR on an IBM i server. The outage in this case ranges from 24 to 48 hours.

The Future of Data Infrastructure

Although hyper-converged infrastructures (HCIs) combine compute, storage and network resources into a single virtualized system, they are not without inefficiencies.

The "general-purpose" data center models that have served so well in the past are reaching their limits of scalability, performance and efficiency, because they use a uniform ratio of resources to address all compute processing, storage and network bandwidth requirements. The "one size fits all" approach is no longer effective for data-intensive workloads (for example big data, fast data, analytics, artificial intelligence and machine learning). What is needed are capabilities that enable more control over the mix of resources each workload requires, so that levels of processing, storage and network bandwidth can be scaled independently of each other. The end goal is a flexible and composable infrastructure.


Figure 1: Today’s data-centric architectures

Although hyper-converged infrastructures (HCIs) combine compute, storage and network resources into a single virtualized system (Figure 1), adding more storage, memory or networking requires adding more processors. This creates a fixed building-block approach (each block containing CPU, DRAM and storage) that cannot achieve the level of flexibility and predictable performance required in today's data centers. As such, Composable-Disaggregated Infrastructures (CDIs) are becoming a popular solution, as they go beyond converging or hyper-converging IT infrastructure into a single integrated unit and instead streamline it to improve business agility.

The Need for Composable-Disaggregated Infrastructures

Given the challenges associated with general-purpose architectures (fixed resource ratios, underutilization and overprovisioning), converged infrastructures (CIs) emerged, delivering preconfigured hardware resources in a single system. The compute, storage and networking components are discrete and managed through software. CIs have evolved into HCIs, where all of the hardware resources are virtualized, delivering software-defined compute, storage and networking.

Although HCIs combine compute, storage and network resources into a single virtualized system, they are not without inefficiencies. For example, scalability limits are defined by the processor, and access to resources is made through the processor. To add more resources, such as storage, memory or networking, HCI designs add extra processors even when they are not needed, leaving data center architects trying to build flexible systems out of rigid building blocks.

According to a recent survey of more than 300 mid-sized and large enterprise IT users, only 45 percent of total available storage capacity in an enterprise data center system has been provisioned, and only 45 percent of compute hours and storage capacity are used. The fixed building-block approach leads to underutilization and cannot achieve the level of flexibility and predictable performance required in today's data center. The disaggregated HCI model needs to be enabled and become easily composable, and software tools based on an open application programming interface (API) are the way forward.

Introducing Composable Disaggregated Infrastructures

A composable disaggregated infrastructure is a data center architectural framework whose physical compute, storage and network fabric resources are treated as services. The high-density compute, storage and network racks use software to create a virtual application environment that provides whatever resources the application needs in real time to achieve the optimal performance required to meet workload demands. It is an emerging data center segment with a total market CAGR of 58.2 percent (projected from 2017 to 2022).


Figure 2: Hyper-Converged vs. Composable

On this infrastructure (Figure 2), virtual servers are composed from independent resource pools of compute, storage and network devices, as opposed to the partitioned resources that are configured as HCI servers. The servers can therefore be provisioned and re-provisioned as required, under software control, to suit the demands of particular workloads, since the software and hardware components are tightly integrated. With an API on the composing software, an application could request whatever resources it needs, delivering real-time server reconfiguration on the fly, without human intervention, and a step toward the self-managing data center.
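
As a sketch of what such an API-driven request could look like, the snippet below composes a virtual server from disaggregated pools over HTTP. The endpoint, the JSON fields and the resource figures are hypothetical placeholders rather than any specific vendor's composability API; real deployments often build on open standards such as the DMTF Redfish API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ComposeServerRequest {

        public static void main(String[] args) throws Exception {
            // Hypothetical request: assemble a server from the compute, memory,
            // storage and network pools without touching the hardware by hand.
            String composeSpec = """
                    {
                      "name": "analytics-node-01",
                      "compute": { "cores": 16 },
                      "memory":  { "capacityGiB": 128 },
                      "storage": { "nvmeDevices": 4, "capacityTiB": 8 },
                      "network": { "bandwidthGbps": 25 }
                    }
                    """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://composer.example.com/api/v1/compose"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(composeSpec))
                    .build();

            // The composing software allocates the requested resources and returns
            // a handle to the newly composed server; re-composition works the same way.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }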

Hyper-Converged versus Composable

A network protocol is equally critical to a CDI, enabling compute and storage resources to be disaggregated from the server and made available to different applications. Connecting compute or storage nodes over a fabric is important because it enables multiple paths to the resource. Emerging as the leading network protocol for CDI implementations is NVMe™-over-Fabrics. It delivers the lowest end-to-end latency from application to storage available and enables CDIs to provide the data-locality benefits of direct-attached storage (low latency and high performance), while delivering agility and flexibility by sharing resources throughout the enterprise.

Non-Volatile Memory Express™ (NVMe) technology is a streamlined, high-performance, low-latency interface that uses an architecture and set of protocols developed specifically for persistent flash memory technologies. The standard has been extended beyond locally attached server applications to deliver the same performance benefits across a network through the NVMe-over-Fabrics specification. This specification enables flash devices to communicate over networks, delivering the same high-performance, low-latency benefits as locally attached NVMe, and there is virtually no limit to the number of servers that can share NVMe-over-Fabrics storage or the number of storage devices that can be shared.

Final Thoughts

Data-intensive applications in the core and at the edge have outgrown the capabilities of traditional systems and architectures, particularly with respect to scalability, performance and efficiency. Because general-purpose systems rely on a uniform ratio of resources to address all compute, processing, storage and network bandwidth requirements, they are no longer effective for these diverse, data-intensive workloads. With the advent of CDIs, data center architects, cloud service providers, systems integrators, software-defined storage designers and OEMs can now deliver storage and compute services with greater economics, agility, efficiency and simplicity at scale, while enabling dynamic SLAs across workloads.

More details: Data center management

Mainframe Technology has a FUTURE

Mainframe technology supplier BMC believes that the technology has a bright future. BMC's 2018 mainframe survey, which polled 1,100 executives and IT technical professionals, found that 92% of respondents anticipated the long-term stability of their mainframe systems – the third consecutive year this percentage has increased.

"Most organizations have a lot of large, complex applications so tightly integrated that, to be honest, the effort required just to rewrite them so you can run and operate them on a cloud-only platform is extremely cost-prohibitive, and extremely risky," said John McKenny, Vice President of Strategy for ZSolutions at BMC, headquartered in Houston.

BMC does not have many customers moving workloads to the cloud, because they don't see the long-term cost benefits, he said. When customers assess the cost of designing, architecting and migrating that componentry of their architecture, "there's just not an economic benefit that holds water," McKenny said.

Flaesch agreed that the commonly held notion that cloud is cheaper isn't true in every case. "We have plenty of evidence of folks moving workloads back from the cloud," he said.

There are distinct parts of the mainframe market to consider from a channel partner's perspective: the systems software, the application software, the staffing elements, the hardware and the hosting, which are all "feeling different kinds of pressures," Flaesch said.

The best opportunity for partners is in client environments that have a significant set of workloads and applications that need to be kept up to date, according to DXC. For instance, Flaesch said DXC has a transportation client that runs demand forecasting and custom logistics applications. "So when it began … running its huge, cutting-edge maintenance and IoT-driven support applications, doing that on a mainframe was a natural thing for them to do," he said.


Flaesch believes the general consensus is that mainframe technology is still the most cost-effective approach for running a substantial set of workloads that need to be consolidated. "[Mainframe systems] will be the most reliable and cost-effective, and it will give you the best service levels of any environment you can have, hands down," he said.

The limitations to that would be when you don't know how many workloads you're going to be running, how fast they need to ramp up, and when IT is not certain about the type of applications it needs to develop and how portable they need to be, he added. For more details: Mainframe Outsourcing

Does AI require high-end infrastructure?

There's no shortage of buzz around artificial intelligence applications in the public sector. They've been touted as something of a digital panacea that can address most of an agency's problems, whether it's a chatbot offloading work from customer-service staff or assisting in fraud detection. Still unclear, however, is what infrastructure organizations must have in place to make the most of AI.

Data science teams are spending less than a quarter of their time on AI model training and refinement because they're mired in infrastructure and deployment issues, according to a survey by machine-learning solutions firm Algorithmia.

To address those issues, some vendors say that high-performance computing is the must-have item for organizations looking to launch AI projects. In a whitepaper Intel published in September, the company outlined why the two go so well together. "Given that AI and HPC both require strong compute and performance capabilities, existing HPC customers who already have HPC-optimized hardware are well placed to start taking advantage of AI," according to the paper. The machines also offer users the chance to improve efficiency and reduce costs by running multiple applications on one system.

A 2017 report on AI and HPC convergence put the emphasis on scalability. "Scalability is the key to AI-HPC so scientists can address the big compute, big data challenges facing them and make sense of the wealth of measured and modeled or simulated data that is now available to them."

Lenovo also recognizes the connection, announcing a software solution last year to facilitate the convergence of HPC and AI. Lenovo Intelligent Computing Orchestration (LiCO) helps AI users by providing templates they can use to submit training jobs – data feeds that will help AI applications learn what patterns to look for – and it lets HPC users continue to use command-line tools.

Still, organizations that don't have high-performance machines shouldn't lose hope, according to Steve Conway, senior vice president of research, chief operating officer and AI/high-performance data analysis lead at Hyperion Research Holdings.

"You can get into this with – a lot of times – the sorts of computers you have in your data centers," Conway said. "All of the agencies have data centers or access to data centers where there are server systems or clusters, and you can run some of these [AI] applications on those."

A primary advantage of high-end computers is that they can move, process and store much more data in short timeframes, and AI is data-intensive. But odds are that if an organization doesn't have HPC, it doesn't have a need for ultra-sophisticated AI.

"At the very bleeding edge of this stuff, you really do need high-performance computers, but the good news there is that they start at under $50,000 now, so they're not terribly expensive, and there are a lot of folks who don't have to spend even that kind of money," Conway said. "They can use the hardware that they have and start investigating and experimenting with machine learning."

The biggest use cases for AI are fraud and anomaly detection, autonomous driving, precision medicine and affinity marketing, which Conway said is the mathematical twin of fraud and anomaly detection but with different objectives. In detection, the goal is to identify the outlier, he said, whereas the other looks for as many similar data points as possible.

But being AI-ready is about more than the machines that power it, said Adelaide O'Brien, research director for IDC's Government Digital Transformation Strategies. To be most effective with AI, organizations must do their homework.

"It's really important to have good, basic data management practices," O'Brien said. "I know that's not glamorous and it's not exciting, but [agencies] need to ensure that there's data access. They also must have the strategy in place" and a "really robust data foundation" for machine learning, she said. "You have to train that machine with lots and lots of data."

She also recommended documenting data sources to ensure the data's veracity and ensuring diverse samples. "You don't want it based on limited demographic data or even a preponderance of historical data – which government agencies have a lot of – because that may not reflect current reality," O'Brien said. "It's so important to train that machine on relevant data."

The AI industry is well positioned for growth. Globally, the business value derived from AI is projected to total $1.2 trillion in 2018, according to research firm Gartner. And that business value could hit $3.9 trillion by 2022.

In September, several senators introduced the Artificial Intelligence in Government Act, which would "improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning."

To keep up with AI, organizations don't have to wait to acquire HPC. "It's important to get started," Conway said. "It doesn't necessarily take a very expensive IBM Watson to do this sort of stuff. They're doing it with the sorts of ordinary cluster computers that are very common in both the public and private sectors."

More details: IT infrastructure

Four Data Center Colocation Trends to Watch in 2019

Colo providers expect a surge in enterprise business driven by hybrid cloud and new, modern tools for consuming their services.

Colocation providers expect to reel in even more enterprise business in 2019, as enterprises rethink infrastructure and retool, shedding as much on-premises data center space as they can and replacing it with cloud services and – when necessary – modern colocation facilities.

As Clint Heiden, chief revenue officer at QTS, explained, enterprises in healthcare, financial services, manufacturing, and other industries that built their own data centers about 10 years ago are now realizing it can cost millions of dollars to bring those facilities up to current standards.

They're also realizing that they need much less data center space for the same infrastructure, "making a refresh very cost-prohibitive," he added. Increasingly, they're turning to colocation as the alternative, where they can get both up-to-date infrastructure and access to cloud providers, often at a lower cost than keeping everything in-house.

To make themselves more valuable to these organizations, many colocation data center operators have been building digital tools to create a customer experience and functionality that feels a lot like public cloud. The core principles here are abstraction of the physical, automation, microservices, APIs, easy software-based provisioning, and unified management of different kinds of infrastructure, be it cloud, colo, or on-prem.

Being able to manage a mix of infrastructure is a key part of these platforms. Hybrid cloud is on the rise – a trend highlighted by the number of hybrid cloud products and features the hyperscale platforms unveiled this year – and colo companies are positioning themselves as the place where private, customer-controlled infrastructure meets public cloud.

Colocation providers are also starting to look for ways they can help customers get their application infrastructure physically closer to end users, both to improve performance and to tame network bandwidth costs.

Hyperscale cloud data centers dominated the conversation this year, James Leach, VP of marketing at RagingWire Data Centers, told us. But the year has also seen some of the first-ever deployments of edge computing infrastructure at wireless towers. "What about a new data center design that combines hyperscale and edge to create 'hedge' data centers?" he said.

Data Center Knowledge recently surveyed a number of executives from the leading data center colocation providers about their expectations for the industry in 2019, and four trends emerged.