Description: This article is a transcript of a talk about test data management at KeyBank. It covers the challenges the team faced, what their expectations were, and some of the approaches they took to meet them.
I’m Matt Malick with KeyBank. I’ve been there for 10 years. I’m a domain manager in the enterprise QA organization. We’re responsible for a couple thousand applications, and we do validation and testing for anything that goes on within the bank. My specific role is shared testing services, such as non-functional testing; we try to provide those services for anything the teams need.
I see some familiar faces, so some of you have heard me before. I’m the CTO for the TDM solution. I help and advise our customers on setting up a TDM practice within their organizations. For the last two to three years I’ve also been leading product management for the test data management solution, working with the product team to further improve the product.
Now I’ll give you a brief agenda. We’re going to talk about what our challenges were, what our expectations were, and some of the approaches we took to get there. First, a little background about KeyBank. KeyBank has a presence in 15 states, most of them in the northern part of the United States, from Maine all the way over to Seattle. If you’re not familiar with us, you’re probably a little south of our footprint. But we’re growing constantly.
That growth is behind one of the opportunities we had in test data management: an acquisition we completed about a year ago, by the time the ink was dry. We’ll talk a little more about that. We’re going to walk through a specific use case where we applied test data management to a continuous delivery opportunity. As with all test data management, we’ve had challenges getting product and functionality out as quickly as possible. Test data is always a challenge.
In a long waterfall process, you have weeks, months, a year to try to provision the data you need, and maybe you can get it in place; there are always one-offs, and those are still challenging. But for this specific use case, we’re migrating to true CI/CD, implementing DevOps for a new online banking solution. Last year we finalized the acquisition of First Niagara, and in doing that we realized our online banking application probably wasn’t going to be up to snuff to handle that large a volume of new customers we were looking to bring the functionality to.
We had what I’ll call a quasi-agile methodology; it was more of a mullet: agile in the front, waterfall in the back. There were opportunities there, but we were moving into a true DevOps experience. Starting out fresh with a brand new online banking application, released very frequently, and trying to have test data lined up for all of the automated testing that needs to take place, it wasn’t going to work with the old solutions of QTP scripts or manual intervention.
Moving away from quarterly releases to code commits multiple times a day, and running many thousands of scripts, some of which stall when data is not available, the manual process that was in place could not provide the data in time. Those are huge challenges. It wasn’t repeatable; we had to go back and redo all of the manual effort each time one minor scenario changed. It’s a very painful process, especially as we’re trying to get release after release and commit after commit out as we move through the environments.
That’s the background on the route we went with our online banking. We containerized everything: we spin up OpenShift and dynamically generate pods based on our volume, the number of transactions per second, or the number of users out there. We implemented XebiaLabs for our release and deploy. We brought all of this new technology in alongside our old solution.
We had the release manager coming to us and saying, “I need data so I can execute my eight to ten thousand automated scripts.” “No problem, we’ll get back to you in a week.” “But we’re committing code tonight; we need to run these tests now so we can see the quality of the code.” To run through it briefly: I think we’re at 9,500 test cases at this point, and runs would fail because somebody stepped on some data or the data wasn’t accurate for what we needed.
Based on those challenges, one of the problems we see is manual work: there are a lot of manual activities, and multiple testing teams are doing very similar things. That manual activity creates inefficiency within the organization itself. To improve that, automation is one answer.
We’re looking at what service delivery model, from a test data perspective, needs to be brought in so that you can take those inefficiencies out of the organization. In terms of the characteristics of that model, the first is standardization, which addresses one of the issues Matt mentioned.
Standardization enables reusability; reusability cannot come unless you have standardization. Standardizing services that can be consumed by multiple testing teams is one of the key activities in your test data delivery architecture. The second part is the automated delivery we talked about. The third part is that it needs to be integrated with your SDLC itself.
Integrability is very important. The fourth aspect is secure data. I’m sure you are already working on securing data, but this is a major aspect as well: we do subsetting and masking and all those capabilities, and that’s part of it. The fifth thing is agility; you need to make sure the model is agile so it brings agility to your organization. Then there’s decoupling.
Generally speaking, organizations have test cases and test data coupled to each other: this particular test case means this particular account number. Tying your test data to the test case is not agile; it does not give you agility, and at some point you cannot run the test anymore.
The test doesn’t need that particular account; it needs an account that matches particular data criteria. Decoupling is another aspect that can bring in agility. So the kind of solution we were looking for is: don’t hard-code the data. We talked about standardization as well. The mapping of test data requirements to test cases is very important, including the precondition data. In my other talk I discussed precondition data; it is the most difficult part.
Precondition data should be included as part of your environment provisioning itself. From an automation perspective, you can standardize on Gherkin; that’s where Matt’s team was heading. If you have JSON-based data that is hard-coded, then to support your automation, the TDM platform needs to be able to dynamically generate the JSON files, so they can be requested at the time the execution is happening, in sync with the data in the backend databases as well.
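As a rough sketch of that criteria-driven request, assuming a hypothetical TDM REST endpoint (the URL, payload shape, and criteria names below are placeholders for illustration, not CA TDM’s actual API), the flow might look like this in Python:

```python
import json
import requests

# Hypothetical endpoint and payload shape; the real TDM REST API will
# differ. This only illustrates the decoupled, criteria-driven flow.
TDM_URL = "https://tdm.example.com/api/data-requests"

def provision_test_data(criteria: dict, out_path: str) -> dict:
    """Request data matching the criteria (no hard-coded account number)
    and write the generated JSON where the automation expects it."""
    resp = requests.post(TDM_URL, json={"criteria": criteria}, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
    return data

# The test declares what it needs, not which account it uses.
provision_test_data(
    {"product": "DDA", "status": "active", "online_banking": "eligible"},
    "testdata/enrollment.json",
)
```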
You know about JSON already; that’s the automation perspective, and that’s where TDM comes into play with a solution-oriented approach. You must have seen this before: bringing data from production is one thing. It’s part of the activity, a checkmark. But how do I make the testing teams consume that data better? That is the most important part of your TDM strategy.
This is where the service catalog comes into play. You can create a set of services: data generation services that eventually feed into your automation scripts; a find-and-reserve capability so that people don’t step on each other’s toes; and the ability to find something in one environment and inject it into another, which we call copy and clone.
It’s not a copy and clone of the whole database; it’s copy and clone of business objects. Those are the different services you can provide, and you can standardize that set of services for the teams to consume. We have talked about the value perspective before, so I’ll skip that, and we’ll go into the solution and how KeyBank implemented it. Sounds good; we’ll switch one more time.
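A minimal sketch of what find-and-reserve and copy-and-clone might look like to a consuming team, again with hypothetical endpoints and payloads standing in for a real catalog API:

```python
import requests

BASE = "https://tdm.example.com/api"  # hypothetical catalog endpoints

def find_and_reserve(criteria: dict, owner: str) -> dict:
    """Find a business object matching the criteria, then reserve it
    so two testers don't step on the same data."""
    match = requests.post(f"{BASE}/find", json=criteria, timeout=30).json()
    requests.post(
        f"{BASE}/reserve",
        json={"object_id": match["id"], "owner": owner},
        timeout=30,
    ).raise_for_status()
    return match

def copy_and_clone(object_id: str, target_env: str) -> None:
    """Copy one business object (a customer plus accounts, cards, ...)
    into another environment; not a database-level copy."""
    requests.post(
        f"{BASE}/clone",
        json={"object_id": object_id, "target": target_env},
        timeout=30,
    ).raise_for_status()
```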
This is very similar to the diagram I put up before, but we’re eliminating all the manual intervention. As a code commit comes in, Jenkins kicks off the jobs and identifies what data needs to be out there. In theory it’s great if we can reserve data, but even if I reserve it, that doesn’t mean somebody’s not going to step on it and cause a couple thousand of my scripts to fail. So our approach is: when the data we have doesn’t meet the requirements, I trigger the REST service within the CA TDM product.
That service goes out to the 8 or 10 different applications involved. This is an online banking system, and there’s very little data in online banking itself: we have a couple of tables that say whether you’re eligible to enroll in online banking and what your password needs to be. Our challenge is all of the downstream applications that online banking depends on.
Does the customer have a credit card? Is that card in arrears? Do they have a DDA, or checking account? Is there a trust account? All of that information is needed to do this testing. The REST service goes out and hits all of these other applications, identifies what data meets the individual test’s needs, packages it back up in a JSON file, and gives it back to Jenkins, which then uses it to kick off the Selenium grid jobs that ultimately run the tests on my banking application.
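Put together, a hedged sketch of that commit-triggered flow (the URLs and field names are placeholders; in the pipeline Jenkins drives this per commit rather than a person invoking it by hand):

```python
import requests
from selenium import webdriver

TDM_URL = "https://tdm.example.com/api/data-requests"   # hypothetical
GRID_URL = "http://selenium-grid.example.com:4444/wd/hub"

def run_scenario(criteria: dict) -> None:
    # 1. Ask TDM for data that satisfies the scenario across the
    #    downstream systems (DDA, cards, trust accounts, ...).
    resp = requests.post(TDM_URL, json={"criteria": criteria}, timeout=120)
    resp.raise_for_status()
    data = resp.json()

    # 2. Drive the test on the Selenium grid using the packaged JSON.
    driver = webdriver.Remote(
        command_executor=GRID_URL,
        options=webdriver.ChromeOptions(),
    )
    try:
        driver.get("https://onlinebanking.example.com/login")
        driver.find_element("name", "username").send_keys(data["username"])
        # ... remaining steps are driven by the generated data ...
    finally:
        driver.quit()
```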
We’re able to execute 10,000 scripts (9,500 technically, but I’m going to go with 10,000 from this point forward) in under 10 minutes. TDM gets us the data, makes sure it’s valid, conditions it if it needs to, gets it back into JSON, kicks it back over, and the tests execute.
We don’t have that manual intervention in between, where tests fail because my data was wrong. On the integration side, we’re looking forward to the expansion of TDM with additional REST services, but today our integration points use JSON files. We use Gherkin to write all the test scripts and cases, so they are automatable.
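To make the Gherkin-to-JSON handoff concrete, here is a small illustrative sketch; the scenario wording, criteria name, and testdata/ path convention are all assumptions, not KeyBank’s actual layout:

```python
import json

# A scenario might read (wording invented for illustration):
#
#   Scenario: Enroll an eligible customer in online banking
#     Given a customer matching criteria "active-dda-not-enrolled"
#     When the customer enrolls in online banking
#     Then enrollment succeeds
#
# The "Given" step loads whatever JSON TDM generated for this run,
# instead of naming an account number in the feature file.

def load_generated_data(criteria_name: str) -> dict:
    """Read the JSON file generated for the named criteria."""
    with open(f"testdata/{criteria_name}.json") as f:
        return json.load(f)
```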
“I need data.” Outside of our online banking automation, do we already have something out there? Is there a portal in place? Yes: we have about 60 different tiles for different applications, so if you need data for functional testing or anything else, you can check the portal. We don’t want to get involved.
We don’t want to have to manually go out and create something for you. If something is missing, or an enhancement needs to take place, the test data management group will go out and build the enhancement. Our goal is not to provide one-off data but to incorporate the request and make it self-serviceable, so you don’t need to come back a second time saying, “I need this data again,” and we don’t need to dig up some SQL statements. I do have a disclaimer on the side.
There are exceptions: maybe it’s a VSAM file we don’t have connected yet and you need data quickly, or it’s something completely one-off that isn’t worth our effort because the application is going to be retired. For those, we still have a fair load of QTP, or our own individual queries that we write. It always happens.
Items always come up last-minute. Here are some of our most popular portal items; this is a very small representation of what the tiles look like. As a financial institution, we are very concerned about who has access to what data, regardless of the environment and whether the data is disguised or not, so we have access provisioning associated with this. If you need data within one of our customer information exchanges or data warehouses or something along those lines, you’ll put through an access request, and you’ll be provisioned to have access only to the project or tiles you need.
Then you’ll be able to go out and serve yourself. We’re about a year and a half into the TDM experience with CA, and we’ve done a significant amount of work to get the success we’ve had. In the past, data creation was 90 percent manual. Our goal is to get that down to 25 percent, because we’re never going to cover everything; there’s always going to be a one-off, always something people are looking for. As for self-service, there essentially was none.
One or two people could service themselves previously, but nothing beyond that. I want to make sure we get the majority of people self-serviceable so we’re not fighting fires. After a year, we got manual creation down to about 40 percent of the data; the outstanding items are applications we haven’t made the investment to connect to.
We haven’t had the opportunity to get there yet, but I’m confident we’ll reach our goal of about 25 percent not being part of the CA TDM solution. We’re already at 65 percent in self-servicing, so we’re confident we can beat our goal of 45 percent. I don’t like having to field requests, so if I can get people set up to serve themselves, we’re good. There’s always something somebody missed.