This week Fujitsu Siemens is announcing a new line of BS2000/OSD servers - the SQ Business Servers. The first to ship will be the SQ100 using dual core Intel Xeon MP processors. At the Fujitsu Siemens Data Center Symposium in Cologne, DCT's Ian Murphy got a chance to talk to Dieter Herzog, Executive VP Enterprise Products, Fujitsu Siemens Computers about why the mainframe is back, virtualisation, ITSM and storage.
You have announced a mainframe based on Intel Xeon with Windows and Linux support. What is driving this mix of technologies?
There is a general trend in the marketplace to go back to large systems. This is being driven by the ability to run mainframe-style technologies successfully on industry standard hardware.
In the past, the problem we had with fully distributed computing was one of control. Virtualisation, automation, the management of complex architectures are better managed using mainframe approaches. Until recently, we were not able to apply these mainframe techniques to the standard architectures that customers were using.
The big advantage of mainframes is that they are stable. They have been tested over decades and because of that they are very high quality. What we want to do is combine innovative technologies with the stability of the mainframe.
What this means for us is that when introducing any innovation we have to ensure that we do not disrupt the stability of the hardware in any way. We also have to ensure that the lifetime of the components is comparable to that of the mainframe. To do this, we had to get a processor from Intel that can be supported over several years. This was not possible in the past.
How easy is it to re-introduce the mainframe approach to a world where commodity hardware is dominant?
We see a lot of people in large datacentres where there is a long history of mainframe technologies. They understand the needs and demands of the hardware and how it works.
The younger workers are coming out of the departmental computing environment, not the heart of the datacentre. They are used to commodity hardware but also see the benefit of stability. We are seeing both coming together without any problems at all.
In the commodity world, the pace of software changes, updates and patches means that stability is sometimes measured in hours rather than months. How do you change this with a mainframe approach?
One thing we do is spend much longer testing on the Primergy hardware. We won't release upgrades for every processor step. This helps to keep everything stable.
With FlexFrame we define a solution as a package of products. We will not update individual products inside the package; instead we will maintain each package as a single solution. This makes it easier and faster to deploy and gives a more stable solution.
Customers do not want the next release of an OS destroying the solution, they want it to remain stable. This is important to us and is something we have always done and will continue to do.
There is a big push towards automating everything, especially as demand grows and workforces shrink. Are you seeing a demand for increased automation?
When we introduced FlexFrame for SAP 3 years ago, we spent a lot of time working on how we could use virtualisation and automation at the application process level. This allowed us to look at the application processes and distribute that over, for example, a blade farm. This had not been done before and we were not sure how far we could reduce costs or how far we could drive the entry point into the market.
The results astonished us. By focusing on the application process we were able to improve performance and improve the user experience. With standard hardware, we were also able to reduce the costs.
The first customers we targeted were Small and Medium Business (SMB). They didn't want to manage the processes, they just wanted to leave it alone to work. This was very successful and we realised that it could also work for customers with a much larger number of users.
When we went to the large datacentres and talked about the automated solution they were not impressed. They wanted control over the automation level. They like to keep certain processes manual and we had to go back and add this into the product. The more we went to large customers with resource management processes, the more they wanted to decide things and do things themselves.
Virtualisation and mainframes have been a partnership for a long time. Now it is a part of everyone's datacentre strategy. What are you doing here?
Virtualisation is a tool to increase utilisation of the systems and we have to manage it. We have a tool targeted at application specific areas for virtualisation and it is delivered by our professional services team as part of the FlexFrame sales and support cycle. It is not a tool that we sell separately.
We go through the customer requirements for a first version of the application configuration by monitoring the system and the application. We then use our own experience to concentrate on those parameters we know the applications need. This is focused on a specific application, not a mixed-mode environment, so we can tune the solution.
We touched on automation earlier. Automation and virtualisation need a proper management infrastructure, especially when you talk about mainframe-level stability. Where are you with the ITIL and ITSM story?
We are further than you may think on ITIL. Two years ago, you may remember that we acquired the Siemens professional services division as part of our services planning. We use ITIL at the service level to plan the professional services we deliver to customers. As part of that, we have developed reporting and control tools that make it easier to monitor what is going on for the customer. These tools are necessary in order to do any service delivery and have been tested with our large customer accounts.
At the beginning of FlexFrame we did not make professional services part of the product. This was one of the reasons behind the acquisition of the Siemens professional services division. Now we make professional services mandatory, and that allows us to ensure there is proper support for ITIL. It is not unusual for people to have problems with service levels, and that is why we make this part of the managed services approach.
A subject that is getting a lot of attention is data storage and cloud computing. There are reasonable concerns over who companies should be trusting and how they can ensure their data is handled safely, securely and within the constraints of compliance legislation. Can a vendor ever be the trusted partner here?
People don't want to give away their data. The problem is not the concept of storing the data offsite but one of trust. Customers don't really know who they should trust, but that is changing.
We are in the process of running SAP datacentres for all their storage. They do the internal development of their software and we look after the data. All the rules and controls are defined by SAP, but we help to define the managed storage portfolios.
This is then provided to the SAP resellers, who are the people who own the customer. The customer does not have a relationship with us; they have one with their SAP reseller. The dealer is the trusted entity; all we do is manage the location where the data is stored.
But what about the problem of competing national legislation over data protection?
There are issues in some European countries, but we are working on a concept to take the actual data storage closer to the customer. This might mean having local backup at FSC sites. The key is the local law and what we have to do for customers.