August 18th, 2017 by blogadmin

Have you ever wondered, “Where do I start with SAP S/4HANA?” There are four strategies you can begin immediately that will pay off with a smoother deployment.

These strategies are written with new SAP deployments in mind. For organizations already running SAP ERP and converting it to SAP S/4HANA, the strategies would be a bit different.

Prepare Your Business Users

Getting business users involved is as important as any technical aspect of the project. This is because SAP S/4HANA is not merely ERP running in-memory. SAP S/4HANA uses a simpler data model to transform business processes. For example, there is no more data reconciliation between finance and controlling in the financial period-end close, ending the most tedious and error-prone part of the entire process. This is a major productivity win for finance, of course, but it is still a change, and one users need to know about up front.

Financial close improvements are just one example. Business Value Adviser can help you understand the many other process improvements. Also, most successful SAP S/4HANA projects begin with a prototype, often running inexpensively in the cloud on a trial system.

Prepare Your Data

SAP is ready with a complete set of data migration tools including templates, data mapping, and data cleansing capability. You can start investigating the data mapping right away. Since SAP S/4HANA is built on a simpler data model and has fewer tables, getting data into SAP S/4HANA is easier than with other ERP systems.
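As a very simplified illustration of what a mapping-and-cleansing exercise looks like (the field names and the cleansing rule below are invented for this sketch, not taken from SAP's actual migration templates):

```python
# Hypothetical legacy-to-target field mapping for a data migration dry run.
# Field names are illustrative only; real SAP S/4HANA migration templates differ.
FIELD_MAP = {
    "CUST_NO": "BusinessPartner",
    "CUST_NAME": "BusinessPartnerFullName",
    "CRED_LIMIT": "CreditLimitAmount",
}

def map_record(legacy_record):
    """Rename legacy fields and apply a simple cleansing rule."""
    mapped = {}
    for old_key, new_key in FIELD_MAP.items():
        value = legacy_record.get(old_key)
        if isinstance(value, str):
            value = value.strip()  # basic cleansing: trim stray whitespace
        mapped[new_key] = value
    return mapped

record = {"CUST_NO": "10001 ", "CUST_NAME": " Acme Corp", "CRED_LIMIT": 5000}
print(map_record(record))
```

Starting with a small dry run like this makes gaps in the mapping visible long before the real migration tooling is configured.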

You should also decide how much historical data you want to include. You can reduce cost by using data aging so that only the most useful data is stored in memory while the rest is available on disk-based systems.

Organize the Deployment Team

Organizations new to SAP have nothing to decide when it comes to the deployment path. You set up a new SAP S/4HANA deployment and migrate data from the legacy system. Organizations already running SAP ERP have more to do at this point, especially if converting their system to SAP S/4HANA.

Instead, focus on the deployment team, perhaps bringing SAP experts on board through hiring or teaming up with an SAP partner. The most successful deployments do initial workshops for functional planning, set up prototype and test systems, and start getting end user feedback early on.

The deployment team should also familiarize itself with SAP Activate for the latest best practices and guided configuration.

Determine the Deployment Destination

The move to SAP S/4HANA is an ideal time to bring ERP into your cloud strategy. An organization new to SAP is unlikely to have in-house SAP HANA expertise, which makes SAP S/4HANA a prime candidate to run in the cloud.

Perhaps a more accurate term, though, would be clouds. You have a complete choice of deployment with SAP S/4HANA, including public cloud, IaaS (Amazon, Azure), and private cloud with SAP or partners. On premise is an option as well, of course.

Other ERP products are completely different from one deployment option to the next, and many don’t even have an on premise or private cloud option. Whether the destination is on premise or clouds, SAP S/4HANA uses the same code line, data model, and user experience, so you get the consistency essential to hybrid or growing environments. This means that instead of supporting disparate products, IT spends more time on business processes improvement.

Source: All the above opinions are personal perspective on the basis of information provided by SAP on SAP S/4HANA




QA trends to be aware of in 2017

August 18th, 2017 by blogadmin

Based on recently published World Quality Report data and the most frequent QA demands from customers, below is a compiled list of QA trends that are going to make a difference in 2017. All of them are worth attention, as they can bring tangible benefits to QA vendors who build them into their service lines.

Test Automation

This is the undeclared king of QA services and remains the best way to speed release cycles, quickly test fixes, and rapidly evolve code in order to catch defects without delaying deployment. There is still a lot of room for improvement, as many companies run manual testing and are only now starting to adopt test automation.
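As a minimal sketch of what moving a manual check into code can look like (the discount rule below is invented purely for illustration, not tied to any particular product), an automated regression suite can be as small as a handful of plain test functions:

```python
def apply_discount(price, percent):
    """Hypothetical business rule under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated regression checks, runnable on every build instead of by hand.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_boundary_values():
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 120)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Once checks like these run automatically on every commit, a fix can be verified in seconds rather than through another manual pass.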

Test automation is not an easy undertaking. It is quite challenging for many vendors and should be precisely tailored to the demands of every customer. It is not unusual to inherit an already developed test automation solution that doesn't fit the customer's specific business context and has to be redeveloped.

We also face an increasing number of potential customers who worry that their in-house QA team may be unable to handle the newly created solution. We understand these worries, and to address them a behavior-driven development (BDD) approach can be applied. It requires no script-writing skills and enables the QA team to handle the automated tests more easily.
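For instance, a BDD scenario is written in plain language that non-programmers can read and maintain. The feature below is a hypothetical example of the Gherkin format commonly used with BDD tools; the step definitions behind each line are implemented once by the automation engineers:

```gherkin
Feature: Order discount
  # Illustrative only; the matching step definitions are written by the automation team.
  Scenario: Loyal customer receives a discount
    Given a customer with 5 previous orders
    When the customer places an order worth 200 EUR
    Then a 10 percent discount is applied
```

The QA team can then add or adjust scenarios without touching any script code.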


Script-less test automation will take on greater importance and test automation will keep going far beyond functional testing. It will provide opportunities for software testers to hone their full-cycle automation skills and not simply enhance their functional testing abilities.

Internet of Things

85% of World Quality Report participants say internet of things (IoT) products are part of their business operations. Most commonly, IoT devices and apps are tested for security, performance and usability.

Less frequently we test such aspects as compatibility, interoperability and resource utilisation, but they also matter to ensure flawless user experience.

To guarantee deep test coverage, we insist that our testers not only validate the device itself and its connection properties, but also think outside the box to check even the rarest, most unlikely scenarios, as domain expertise alone is no longer enough for comprehensive testing of IoT products.


If you put on an Apple Watch, open the heart rate screen and then take the device off and place it on a pair of jeans or on a towel (fabric matters), the pulse rate reaches 200 beats per minute. Too much for a towel!

Is this situation real? And how should it be treated, as a bug or as functionality?

In fact, quite real: the user finishes his training in the gym, takes the device off and puts it on a towel. But to predict such a scenario a tester should act as a real user and test the device in real life.


In the near future, the “out-of-the-box thinking” problem will be solved by means of artificial intelligence solutions. If designed well, they’ll provide real-time monitoring and analytics of IoT products.

Big Data

The digital revolution has led to the rise of big data. Large companies frequently ask for strategies to test big data systems that appear to be too large in volume to be managed in traditional ways.

The most frequent issues are not having enough storage space to back up test data, or not being able to manage the data on a single server. What we pay attention to when working on big data system quality assurance is the importance of verifying data completeness, ensuring data quality and automating regression testing. A tester has to be surgical about testing rather than taking a brute-force approach.
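A concrete, if deliberately simplified, form of the completeness check mentioned above is comparing record counts and per-column checksums between source and target after a load (the table layout here is hypothetical):

```python
import hashlib

def column_checksum(rows, column):
    """Order-independent checksum of one column, for source/target comparison."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(str(row[column]).encode("utf-8")).hexdigest()
        digest ^= int(h, 16)  # XOR makes the result independent of row order
    return digest

def completeness_report(source_rows, target_rows, columns):
    """Compare row counts and per-column checksums after a data load."""
    report = {"count_match": len(source_rows) == len(target_rows)}
    for col in columns:
        report[col] = column_checksum(source_rows, col) == column_checksum(target_rows, col)
    return report

source = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
target = [{"id": 2, "amount": 20}, {"id": 1, "amount": 10}]  # same data, different order
print(completeness_report(source, target, ["id", "amount"]))
```

On a real system the same idea would be pushed down into the cluster (counting and hashing where the data lives) rather than pulling rows to one machine.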


There will likely be new and innovative ways, methods and techniques for big data testing. Test automation will also be widely applied, as there is simply too much data and too little time to manually drill down into all of it.

New Service Types

Alongside traditional services, QA vendors are working to develop new services that will bring value to customers and help them gain a competitive advantage in the market. For example, our company has developed a baseline-testing offer. What is the idea? Baseline testing provides an assessment of the application's overall quality level and a roadmap for improving it.

QA consulting and test advisory services are also gaining popularity. They appeal to customers who want to build in-house testing strategies from scratch, or improve existing ones, rather than outsource their testing on an ongoing basis.

Also, a vast number of testing service providers offer to establish a corporate Testing Center of Excellence (TCoE). TCoE specialists run in-house tests and expand the competencies of internal specialists.


With traditional software testing services now offered by many companies, QA vendors will have to devise new service lines to stay ahead of competitors.

Security Testing

Trends are not only about emerging novelties; they are also about perennial topics, and security testing is one topic that will never go out of style. What QA providers should be preparing for now is a steady increase in systematic testing of every type of software product, and providing staff augmentation to strengthen security testing across the product development life cycle.

Mobile app security is a significant field, as the number of mobile devices and downloaded applications grows rapidly alongside the number of attacks. The demand for security testing of mobile applications is likely to increase further, given the large number of applications that work with users' personal data.

The importance of security testing of IoT products will also increase in 2017. The vulnerability of IoT products manifested itself when the Mirai botnet appeared in 2016; hackers used it to launch high-profile DDoS attacks against various Internet properties and services. While there are mechanisms to mitigate DDoS attacks, there is no way to find out what the next target will be. Again, users should be aware of the threat of using simple passwords and opening devices to remote access, and security specialists should continuously expand their competencies by working with every novelty on the market.

The importance of cloud computing security will also increase. More and more companies are turning to cloud-based solutions for their ease of use and the ability to quickly scale the architecture when needed. At the same time, cloud infrastructure is very attractive to attackers, because it gives access to all of a company's resources and personal information.


To stop the vulnerability trend from becoming a reality in 2017, users, mobile app developers and software testers should join forces. Users have to become smarter downloaders and learn not to share personal data with every app they install; developers should follow at least basic security practices; and testing engineers should be able to identify threats to the app and help develop countermeasures.

Source: All the above opinions are personal perspective on the basis of information provided by Software Testing News



Maximize the Value of Cloud ERP with SAP S/4HANA Cloud

August 4th, 2017 by blogadmin

Today’s business is moving faster than ever thanks to digital technologies. In this environment, gaining a digital edge means gaining the ability to focus on what matters most to the future of your business and your customers so you don’t fall behind.

In discussions with early adopters of SAP S/4HANA Cloud, it has become clear why intelligent cloud ERP is at the core of their digital value creation.

Intelligent cloud ERP is not coming — it is here. It’s ready to be adopted, consumed, and built on.

In the cloud, SAP is moving from being the system of record to being the system of innovation, building on 45 years of experience with best practices and translating that experience into a set of capabilities only possible in the cloud.

What that means for customers is access to a system that is fast to implement, easy to use, free from infrastructure maintenance, built to provide the highest process standards, and constantly upgraded to offer the latest in machine learning and other innovations. Time to value is dramatically reduced, innovation is delivered by SAP continuously, and customers and partners are enabled to deliver innovation themselves.

Three themes emerged in the discussion to show why customers are finding a competitive edge and generating new value with our intelligent cloud ERP solution.

Speed is a Strategic Advantage

Business is moving incredibly fast, which is why it’s essential that you have a rapid-to-adopt intelligent core ERP. Intelligent cloud ERP enables you to quickly implement your new system across the organization and bring your core processes on board.

Beyond adoption, the cloud also enables fast upgrades. We push out regular quarterly upgrades and releases, providing customers with the latest technologies four times a year. This velocity of upgrades and new capabilities isn't possible with an on-premises adoption.

And there is another aspect of speed customers can’t live without: the speed to see data in real-time and act on it. Real-time architecture provides data (knowledge) — and also the tools to act on it quickly.

Standardization to Best Practices is Critical

SAP S/4HANA Cloud is based on a Fit-to-Standard model that embeds best practices across the company. This model brings processes into alignment based on decades of experience and unique knowledge of effective and efficient core business processes. It’s a major break from the complex, customized ERP implementations of yesterday.

Customers tell us the Fit-to-Standard model helps them understand what they need to do and how to accomplish processes more efficiently.

According to Richard St-Pierre, president of C2 International, “The key value a cloud ERP unlocks is a simple way to operate. Especially for a smaller or midsize organisation, with all the legal, encryption and backup requirements. It’s simply overwhelming for a small shop that doesn’t have a large IT team. It is becoming way too complex for a company to operate. So, a cloud-based solution like S/4HANA Cloud is really the only option. The alternative simply requires too many resources that we don’t have.”

With a standardized core, they can move at higher velocity and take advantage of new technologies that differentiate their services, rather than focusing energy on creating customized solutions to common industry processes.

Simplicity Powers Business Success

One of the most popular innovations within SAP S/4HANA Cloud is its consumer-grade usability. Executives are confident that they can speed adoption internally with the easy-to-use interface, and for businesses with a distributed workforce, the full mobile capabilities increase the value of the solution exponentially. Instead of having to search out tasks that need attention, users have actions pushed to them, much like a Twitter or podcast notification.

More strategically, the simplicity of SAP S/4HANA Cloud means customers don’t focus on infrastructure. Instead, they focus on the business, with the confidence that they’ll be supported on a regular basis — and receive quarterly updates that make the solution ever more valuable. Intelligent cloud ERP grows with you, whether growth is in scale or business function, without a team maintaining infrastructure and implementing potentially disruptive updates.

Intelligent Cloud ERP is Here Now

Competitors are already adopting a solution that provides greater speed, simplicity, and standardized best practices, and receiving regular infusions of the latest in machine learning and analytics capabilities. Don’t fall behind with outdated technology or an outmoded approach. Instead, learn how you can create value on the new intelligent cloud ERP.

Source: All the above opinions are personal perspective on the basis of information provided by SAP on SAP S/4HANA Cloud platform


How To Survive as a QA in a Software Development Team

August 4th, 2017 by blogadmin

It is not always easy to be a software tester in a software development team. Developers will often consider software quality assurance (QA) people as inferior and wonder how they could question the perfection of the beautiful code they have just written. This article discusses some of the physical and psychological issues that software testers face and proposes solutions to avoid them.

What is usually considered first when it comes to a discussion of a successfully finished software development project? Developers’ efforts, the stack of technologies used, the pros and cons of the chosen development methodology, and so on. But the issues that software quality assurance (QA) team members face from day to day are usually barely mentioned. In this article we will try to shed some light on this question and consider possible solutions to the most significant problems.

The list of day-to-day disagreements that can crush the will of a QA specialist can be divided into two groups: physical and psychological aspects. Physical aspects are caused by the peculiarities of the industry and describe trends characteristic of most companies. Psychological aspects mostly belong to the sphere of interpersonal communication. You may or may not face them during your professional activities, depending on the experience of your colleagues, the features of project management, and other aspects particular to a given company.

Physical Aspects: QA to Developers Ratio and Gender Imbalance

This section contains the issues associated with the software development industry itself. Even if you don’t have any experience as a member of a software testing company, you can predict some of these issues by analyzing the available statistical data. For example, let’s take a look at the ratio of testers to developers. Choosing the proper size of the QA team depends on many aspects. Not all companies follow Microsoft’s golden rule of keeping the developers-to-testers ratio at 1:1; the most common ratio is 1 tester to 3 developers, and it can even be 1 tester to 10 developers in some cases. The point is that in the most optimistic scenario, a QA specialist will have to handle the code written by three different developers. As a minimum. If the workflow was not adequately planned, you can find yourself overwhelmed with tasks very quickly, which can lead to a decline in productivity, stress, and frustration.

Some software development approaches, like Scrum, imply regular meetings that help discuss what has been done, what needs to be done, and what problems the team has. This Agile context is a good chance for a software tester to draw attention to the great many tasks he or she has. But on larger V-model or Waterfall-based projects that don’t imply regular meetings by design, there should be a mechanism for communication between teams. The project manager has to ensure that there are no unspoken opinions and that QA team members are free to discuss the problems they face as well as ideas to solve them.

The next issue is related to the gender imbalance in the IT industry. According to the statistics, developers are mostly men, while the world of QA is represented mainly by women. This situation can lead to different issues. The most obvious consequence is that the relationships between these teams can go far beyond professional etiquette. The problem can take different forms, the most innocuous of which is the common difficulty of communication between groups of men and women. Inappropriate behavior and flirting carry more serious consequences and can influence the psychological climate inside the company. A strict company policy regarding inappropriate behavior in the workplace should be brought to the attention of all employees.

Psychological Aspects: “It Works on my PC” and why Team Building is Important

From time to time, every QA team member faces a situation where a developer or a manager disagrees that a detected bug is a bug, despite all the evidence. The arguments may vary, but the most common situation usually looks like this. You come to the developer and describe the bug you have just detected. But instead of rushing to his desk and fixing everything, he makes a helpless gesture and tells you: “But everything works fine on my PC!” Attempts to win him over to your point of view may spoil relationships within the team and complicate further work on the project. It may look like a complex psychological issue, but it has a relatively simple technical solution: make sure the QA and development teams run the same environment. This approach can help to avoid the above problem.
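One pragmatic way to enforce “same environment” is to pin both teams to a single container definition. The snippet below is only a sketch in docker-compose form; the service names, image tag and variables are placeholders for whatever your project actually uses:

```yaml
# Shared by developers and QA, so "works on my PC" means the same PC for everyone.
# Image tags and variables here are illustrative placeholders.
services:
  app:
    build: .
    environment:
      - APP_ENV=test
    depends_on:
      - db
  db:
    image: postgres:12.3   # pin an exact version, never "latest"
    environment:
      - POSTGRES_PASSWORD=example
```

With both teams starting the stack from the same file, a bug that reproduces in QA’s environment reproduces at the developer’s desk as well.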

Although the profession is mostly associated with solving technical issues, non-technical problems can be quite difficult to overcome. Since you work with other people, you should always remember that the impact of human factors such as subjective judgment can’t be underestimated. For example, what should you do when the project manager is so intent on finishing the project as soon as possible that he insists a detected bug is not a bug at all and there is no reason to spend extra time and effort on fixing it? Strict specifications and knowledge of similar products from large companies are magic pills that can prevent such discrepancies. Indeed, if your company has clearly defined what is a bug and what is a feature, you don’t even have to convince anyone of your conscientiousness and can be sure your work is not done in vain. And when you know exactly what your potential competitors offer their customers, and there is a bug that could break the functionality of your product and deprive you of a competitive advantage, ignoring it can negate all your efforts. Any manager in such a situation will treat the opinion of the testing team with due attention.

The burden of responsibility for the quality of the released product usually rests on the QA team. Unfortunately, in most cases, the ideas for improving the workflow proposed by testers are ignored. You can imagine the psychological pressure this situation can cause: you must release the product in a limited time, you have no influence on how everything is done, and at the same time you are considered responsible for its quality. It is an unpleasant situation indeed. The power to resist such challenges usually comes with experience. If you can’t change the situation, you have to adapt to it. Even if you have to work to a tight deadline, don’t rush. Spend as much time as you need to correctly prioritize code checking and the depth of code coverage, and you will be able to avoid undesirable consequences.

In most cases, the team of developers keeps apart from the QA team. Developers stick together, share common interests, and keep their distance from a person who looks at their code as a pile of bugs. Given that the overall number of testers in a company is lower than the number of developers, a QA team member can sometimes feel like an outcast. This can lead to a situation where testers and developers perceive each other as members of different castes rather than parts of a common mechanism. Various forms of team building can correct the situation, and they don’t have to be costly. The key goal of team building is learning to solve problems together. You don’t have to climb a mountain with your colleagues to do this; it sounds exciting, but it is not necessary. There are plenty of activities that can be held in your own office and will not take more than 20 minutes. Show some creativity and you will find dozens of ways to create team spirit.


The climate within a software development company is a rather sensitive subject. The IT industry is a heterogeneous environment, and there is no single solution that fits all companies. Creating a good team spirit is a job that requires adaptation and flexibility; the desire to impose one’s will regardless of circumstances will do more harm than good. We hope that by combining the approaches presented in this article, you will be able to find your own unique way of building the software development dream team.

Source: All the above opinions are personal perspective on the basis of information provided by Software Testing Magazine



SAP has moved beyond ERP

July 11th, 2017 by blogadmin
The majority of people who are aware of SAP think it is just an ERP solution, but that is no longer true. Over the last few years SAP has transformed itself and now also provides cloud computing and digital solutions. Presently, more than 50% of SAP's business comes from non-ERP solutions.
SAP launched HANA (High-performance ANalytic Appliance) in 2010, and it has been a very successful tool so far. Its biggest benefit is realised by senior management, as it provides real-time business information that is essential for prompt decision-making. As we all know, SAP is a costly solution due to very high licensing fees; the cloud version of HANA has addressed this issue as well by offering “Pay per Use” options.
Besides this, SAP has also acquired many cloud-centric solutions such as Hybris, SuccessFactors, Ariba, Concur and Fieldglass. You may be surprised to know that SAP presently has more than 200 million users on the cloud.
The core ERP solution is now called S/4HANA. It can be implemented on premise or in the cloud, which has come as a big relief for MSME organisations that find it difficult to implement SAP due to the huge license cost. Companies are slowly migrating from traditional ERP to S/4HANA.
Though there are many ERP solutions in the Indian market, SAP's market share is more than 50%, and there is a big demand for SAP-skilled resources in India. One of the biggest sources of demand is ongoing “Support and Maintenance” activity and the rollout of GST: incorporating GST changes into every IT solution has created a big demand for ERP-skilled resources.
SAP Digital Boardroom is primarily used to simplify performance reporting across all business areas in real time, and it has fully automated business intelligence capabilities.
SAP is also investing a lot of money in machine learning, Blockchain innovations and IoT. I am sure something new will emerge very soon.
SAP is the market leader in ERP in India and has huge potential for job creation. If you are part of the business world and want to pursue a career in SAP consultancy, now is the right time to get yourself trained in SAP.

Source: Blog by RK Bajpai



Big Data Learning Path for all Engineers and Data Scientists out there

July 7th, 2017 by blogadmin


The field of big data is quite vast, and it can be a daunting task for anyone who starts learning big data and its related technologies. Big data technologies are numerous, and it can be overwhelming to decide where to begin.

This is the reason I thought of writing this article. It provides a guided path to start your journey to learn big data and will help you land a job in the big data industry. The biggest challenge we face is identifying the right role as per our interests and skill sets.

To tackle this problem, I have explained each big data role in detail, also considering the different job roles of engineers and computer science graduates.

I have tried to answer the questions you have or will encounter while learning big data. To help you choose a path according to your interests, I have added a tree map that will help you identify the right one.

 Table of Contents

  1. How to get started?
  2. What roles are up for grabs in the big data industry?
  3. What is your profile, and where do you fit in?
  4. Mapping roles to Big Data profiles
  5. How to be a big data engineer?
  • What is the big data jargon?
  • Systems and architecture you need to know
  • Learn to design solutions and technologies
  6. Big Data Learning Path
  7. Resources


  1. How to get started?

One of the very first questions that people ask me when they want to start studying Big data is, “Do I learn Hadoop, Distributed computing, Kafka, NoSQL or Spark?”


Well, I always have one answer: “It depends on what you actually want to do”.


So, let’s approach this problem in a methodical way. We are going to go through this learning path step by step.

  2. What roles are up for grabs in the big data industry?

There are many roles in the big data industry. But broadly speaking, they can be classified into two categories:

  • Big Data Engineering
  • Big Data Analytics

These fields are interdependent but distinct.

Big data engineering revolves around the design, deployment, acquisition and maintenance (storage) of large amounts of data. The systems that big data engineers design and deploy make relevant data available to various consumer-facing and internal applications.

Big data analytics, in turn, revolves around utilizing the large amounts of data held in the systems designed by big data engineers. It involves analyzing trends and patterns and developing various classification, prediction and forecasting systems.

Thus, in brief, big data analytics involves advanced computations on the data, whereas big data engineering involves designing and deploying the systems and setups on top of which that computation is performed.

 3. What is your profile and where do you fit in?

Now that we know what categories of roles are available in the industry, let us try to identify which profile suits you, so that you can see where you may fit in the industry.

Broadly, based on your educational background and industry experience we can categorize each person as follows:

  • Educational Background

(This includes interests and doesn’t necessarily point towards your college education).

  1. Computer Science
  2. Mathematics
  • Industry Experience
  1. Fresher
  2. Data Scientist
  3. Computer Engineer (work in Data related projects)

 Thus, by using the above categories you can define your profile as follows:

Eg 1: “I am a computer science grad with no experience but fairly solid math skills.”

You have an interest in Computer science or Mathematics, but with no prior experience you will be considered a Fresher.

Eg 2: “I am a computer science grad working as a database developer”.

Your interest is in computer science and you are fit for the role of a Computer Engineer (data related projects).

Eg 3: “I am a statistician working as a data scientist.”

You have an interest in Mathematics and are fit for the role of a Data Scientist.

So, go ahead and define your profile.

(The profiles we define here are essential in finding your learning path in the big data industry).

  4. Mapping roles to profiles

Now that you have defined your profile, let’s go ahead and map the profiles you should target.

 4.1 Big Data Engineering roles

If you have good programming skills and understand the basics of how computers interact over the internet, but have no interest in mathematics and statistics, you should go for big data engineering roles.

4.2 Big data Analytics roles

If you are good at programming and your education and interests lie in mathematics & statistics, you should go for big data analytics roles.

  5. How to be a big data Engineer?

Let us first define what a big data engineer needs to know and learn to be considered for a position in the industry. The first and foremost step is to identify your needs. You can’t just start studying big data without identifying them; otherwise, you would just be shooting in the dark.

In order to define your needs, you must know the common big data jargon. So let’s find out what big data actually means.

5.1 The Big Data jargon

A big data project has two main aspects: the data requirements and the processing requirements.

  • 5.1.1 Data Requirements jargon

Structure: As you are aware, data can be stored either in tables or in files. If data is stored according to a predefined data model (i.e. it has a schema), it is called structured data. If it is stored in files and does not have a predefined model, it is called unstructured data. (Types: Structured/Unstructured)

Size:  With size we assess the amount of data. (Types: S/M/L/XL/XXL/Streaming)

Sink Throughput: Defines the rate at which data can be accepted into the system. (Types: H/M/L)

Source Throughput: Defines the rate at which data can be updated and transformed within the system. (Types: H/M/L)

  • 5.1.2 Processing Requirements jargon

Query time: The time that a system takes to execute queries. (Types: Long/ Medium /Short)

Processing time: Time required to process data (Types: Long/Medium/Short)

Precision: The accuracy of data processing (Types: Exact/ Approximate)

5.2 Systems and architecture you need to know

Scenario 1: Design a system for analyzing the sales performance of a company by creating a data lake from multiple data sources such as customer data, leads data, call center data, sales data, product data, weblogs, etc.

5.3 Learn to design solutions and technologies

Solution for Scenario 1: Data Lake for sales data

(This is my personal solution; you may come up with a more elegant one. If you do, please share it below.)

So, how does a data engineer go about solving the problem?

A point to remember: a big data system must not only integrate data from various sources seamlessly and keep it available at all times, it must also make analyzing the data and using it to build applications (an intelligent dashboard, in this case) easy, fast, and reliable.

Defining the end goal:

  1. Create a Data Lake by integrating data from multiple sources.
  2. Automated updates of the data at regular intervals of time (probably weekly in this case)
  3. Data availability for analysis (round the clock, perhaps even daily)
  4. Architecture for easy access and seamless deployment of an analytics dashboard.
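To make goal 1 concrete, here is a minimal, standard-library-only sketch of landing raw batches in a file-based “lake”, partitioned by source system and load date. All names here are illustrative; a real deployment would use HDFS or object storage, and a scheduler (weekly, per goal 2) rather than a manual call:

```python
import csv
import datetime
import pathlib

def ingest(source_name, rows, lake_root="lake"):
    """Land one batch of raw rows in the lake, partitioned by
    source system and load date, so nothing is thrown away."""
    partition = (pathlib.Path(lake_root) / source_name
                 / datetime.date.today().isoformat())
    partition.mkdir(parents=True, exist_ok=True)
    out_file = partition / "part-0000.csv"
    with out_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return out_file

# One batch from the sales source; other sources land the same way.
path = ingest("sales", [{"order_id": 1, "amount": 250}])
print(path)
```

Because each batch lands under its own source/date partition, later analysis jobs can read everything for one source, or one date, without touching the rest of the lake.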

Now that we know what our end goals are, let us try to formulate our requirements in more formal terms.

  • 5.3.1 Data related Requirements


Structure: Most of the data is structured and has a defined data model, but sources like weblogs, customer interactions/call center data, image data from the sales catalog, and product advertising data are unstructured. The availability and relevance of image and multimedia advertising data varies from company to company.

Conclusion: Both Structured and unstructured data

Size: L or XL (choose Hadoop)

Sink throughput: High

Quality: Medium (Hadoop & Kafka)

Completeness: Incomplete

  • 5.3.2 Processing related Requirements

Query Time: Medium to Long

Processing Time: Medium to Short

Precision: Exact

As multiple data sources are being integrated, it is important to note that different data will enter the system at different rates. For example, the weblogs will be available in a continuous stream with a high level of granularity.

Based on the above analysis of our requirements for the system we can recommend the following big data setup.


[Image: Recommended big data setup]

  6. Big Data Learning Path

Now you have an understanding of the big data industry and of the different roles and requirements for a big data practitioner. Let’s look at the path you should follow to become a big data engineer.

As we know, the big data domain is littered with technologies, so it is crucial that you learn the ones that are relevant and aligned with your big data job role. This is a bit different from conventional domains like data science and machine learning, where you start at one place and endeavor to cover everything in the field.

Below you will find a tree which you should traverse to find your own path. Even though some of the technologies in the tree are flagged as a data scientist’s forte, it is always good to know all the technologies down to the leaf nodes of any path you embark on. The tree is derived from the lambda architecture paradigm.

[Image: Big data learning path tree]

With the help of this tree map, you can select the path as per your interest and goals. And then you can start your journey to learn big data.

One of the essential skills for any engineer who wants to deploy applications is Bash scripting. You must be very comfortable with Linux and Bash scripting; this is an essential requirement for working with big data.

At their core, most big data technologies are written in Java or Scala. But don’t worry: if you do not want to code in these languages, you can choose Python or R, because most big data technologies now support Python and R extensively.

Thus, you can start with any of the above-mentioned languages. I would recommend choosing either Python or Java.

Next, you need to be familiar with working on the cloud, because nobody is going to take you seriously if you haven’t worked with big data there. Try practicing with small datasets on AWS, SoftLayer, or any other cloud provider. Most of them have a free tier so that students can practice. You can skip this step for the time being if you like, but be sure to work on the cloud before you go for any interview.

Next, you need to learn about distributed file systems. The most popular is the Hadoop Distributed File System (HDFS). At this stage you can also study whichever NoSQL database is relevant to your domain. The diagram below helps you select a NoSQL database to learn based on the domain you are interested in.

The path until now covers the mandatory basics which every big data engineer must know.

Now is the point where you decide whether you would like to work with data streams or with dormant, large volumes of data. This is the choice between two of the four V’s that are used to define big data (Volume, Velocity, Variety, and Veracity).

So let’s say you have decided to work with data streams to develop real-time or near-real-time analysis systems; then you should take the Kafka path. Otherwise, take the MapReduce path, and thus follow the path that you create. Do note that in the MapReduce path you do not need to learn both Pig and Hive; studying only one of them is sufficient.
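To see what the MapReduce path is about, here is a single-process Python sketch of the classic word-count job. In a real Hadoop cluster the map and reduce phases below run distributed across many machines, but the programming model is the same:

```python
def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    for word in document.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    """Reduce: group the pairs by key (word) and sum the counts."""
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

documents = ["big data is big", "data streams and data lakes"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
word_counts = reduce_phase(pairs)
print(word_counts["data"])  # 3
```

The framework’s job in production is exactly the part hidden here: shipping the map output across the network, grouping by key, and running the reducers in parallel.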

In summary: The way to traverse the tree.

  1. Start at the root node and perform a depth-first traversal.
  2. Stop at each node and check out the resources given in the link.
  3. If you have decent knowledge and are reasonably confident at working with the technology then move to the next node.
  4. At every node try to complete at least 3 programming problems.
  5. Move on to the next node.
  6. Reach the leaf node.
  7. Start with the alternative path.

Did the last step (#7) baffle you? Well, truth be told, no application has only stream processing or only slow, delayed processing of data. Thus, you technically need to master the complete lambda architecture.

Also, note that this is not the only way you can learn big data technologies. You can create your own path as you go along. But this is a path which can be used by anybody.

If you want to enter the big data analytics world, you can follow the same path, but don’t try to perfect everything.

For a data scientist capable of working with big data, you need to add a couple of machine learning pipelines to the tree below, and concentrate on those pipelines more than on the tree itself. But we can discuss ML pipelines later.

Add a NoSQL database of your choice, based on the type of data you are working with, to the above tree.

[Image: NoSQL database options]

As you can see, there are loads of NoSQL databases to choose from, so it always depends on the type of data you will be working with.

And to provide a definitive answer on the type of NoSQL database, you need to take into account your system requirements, such as latency, availability, resilience, accuracy, and of course the type of data you are dealing with.

Source: All the above opinions are personal perspective on the basis of information provided by Analytics Vidhya




Which Software Testing Career Path and Certification is Right for You?

June 28th, 2017 by blogadmin No comments »

Are you wondering which ISTQB certification is right for you? The following is a short explanation of the various certifications and the way you might want to proceed based on your career goals. The tricky thing in our industry is the constant change. The skills of today may or may not be marketable tomorrow, so while you’re thinking about what you want to do, you should also consider what you need to do to be able to get or retain the job you want.

Foundation Level – This is where you need to start. This is the gateway certification for all the other ISTQB certifications. This level is designed for beginners all the way up to those who have been in the industry for a while (or maybe a long while) and need to brush up their skills and update their terminology. One of the fastest ways to fail in an interview is to use the wrong terms for testing processes, documents and techniques. Organizations tend to adopt their own terminology and it helps to have a base of “standard” terminology, particularly before you venture out for an interview.

The Foundation Level is in the process of being expanded to include several extension modules. Right now, the agile extension is due to be available in early 2014 and work is starting on the model-based testing extension. These are separate certifications you can get that are “added on” to your Foundation Level certification.

Advanced Level – This is where you need to start making decisions. What do you like to do? What do you want to do? Where are the most opportunities?

Advanced Level – Test Analyst – If you are not very technically minded, and would rather work with the user, know the business application and apply your skills more for analysis than programming, you want to pursue the Advanced Test Analyst certification. This certification is designed for the strong tester who has a deep understanding of the business domain and the needs of the user. You’ll learn about designing good test documentation, conducting effective reviews, and participating in risk analysis sessions (particularly to help determine the impact of a realized risk to the business user). You’ll learn about how you can contribute the test information (input data, action, expected results) to the test automation effort and you’ll learn about usability testing. You’ll also build upon the test techniques you learned at the Foundation Level and will learn new techniques such as domain analysis and cause-effect graphing, as well as how to test using use cases and user stories. You’ll learn more about defect-based and experience-based techniques so you’ll know how to pick an appropriate defect taxonomy and how to implement traceable and reproducible exploratory and checklist-based testing. Let’s not forget process improvement as well. You’ll learn what to track in your defect management to be sure you have the information to figure out what could be improved in your process and how you can do it. This certification is designed for the person who wants to spend their time testing, not programming or delving into the code or troubleshooting technical issues.
The path for the Advanced Test Analyst at the Expert Level will include a further specialization in usability testing and further development of testing techniques. At this point, these new syllabi are being discussed but will not be available until at least 2015.

Advanced Level – Technical Test Analyst – OK, admit it, you really like to play in the code. You like to review it, program tests to test it and create test automation and tools. If this describes you, you definitely need to be looking at the Advanced Technical Test Analyst certification. This certification is designed for the technically minded individual who wants to and is capable of programming, both in scripting languages (e.g., Python) as well as standard programming languages (e.g., Java). You’ll learn how to approach white-box testing to find the difficult problems that are often missed by the black-box testing that is usually done by the Test Analyst. You will learn strong testing techniques that will allow you to systematically test decision logic, APIs and code paths. You will also learn about static and dynamic analysis techniques and tools (stamp out those memory leaks!). You will learn about testing for the technical quality characteristics such as efficiency (performance), security, reliability, maintainability, and portability. You’ll learn how to do effective code and architectural reviews. And, you’ll learn about tools – using tools, making tools, and a little about selecting the right tools. After all, you wouldn’t want to accidentally get a tool that creates code mutants (really, that’s a legitimate tool usage) when you really wanted a simulator. And did I mention automation? You will learn the basis for automation that will be built on at the Expert Level.

The Advanced Technical Test Analyst certification is the gateway to the Expert Level for Test Automation (Engineering) and Security. The Test Automation (Engineering) syllabus and the Security syllabus and their associated certifications are likely to be available in 2014 or early 2015.

Advanced Test Manager – Those who can, do, and those who can’t, manage? Well, that’s not usually a successful formula for a test manager. If you are a test manager or want to be one, and you are willing to learn all the necessary techniques and practices to be successful, then this certification is the one for you. You will learn all about test planning, monitoring and controlling for projects but you will also learn about establishing test strategies and policies that can change the course of testing for the organization. You will learn about how to effectively manage, both people and projects, and will learn the importance and application of metrics and estimation techniques. You will learn your role in reviews. You will learn how to effectively manage defects and how to focus on improving the test process. You will also learn the importance and proper usage of tools and be able to set realistic expectations regarding tool usage. So, if you like telling people what to do, and they tend to listen to you, this is probably the right certification for you. However, that said, remember that technical people respect technical people, so rather than just getting the Advanced Test Manager certification, you should think about also getting at least the Advanced Test Analyst certification as well.
The Advanced Test Manager certification is the gateway to the Expert Levels for Improving the Test Process and Test Management. The Expert Level Improving the Test Process certification focuses on various techniques and models that are used for test process improvement. This provides a good coverage of the most popular models and provides information regarding how to approach an improvement effort to net an effective result. The Expert Level Test Management certification focuses on honing those strategic, operational and personnel skills to make you the best test manager you can be. There is significant discussion in the syllabus about how to be sure your department is performing well and is receiving the accolades it deserves. There is also realistic information regarding managing people effectively and dealing with difficult situations.

The Advanced Test Manager certification is also a pre-requisite for the management part of the Expert Level Test Automation certification. This focuses on how to effectively manage an automation project, including getting the right tools, resources, budget and timeframe. This syllabus should be available in late 2014 or early 2015.

Which Way to Go?
It’s entirely up to you. As you can see, there are several ways you can go with the certification path. And remember, for example, you might not want to get the Advanced Technical Test Analyst certification if you are a test manager, but you can always read the free syllabus and learn something even without a big time investment. They make for interesting reading, even if you are not planning the particular career path that is indicated. Our industry is constantly changing and new syllabi are always in the works. If you plan to head for the Expert Level, it’s a good idea to start planning your path now as that may determine which Advanced certification(s) you will need. Keep an eye on the ISTQB web site for new additions to the syllabus family. And remember to train, not just for your current job, but for the next job you want to get. Right now, the job market is hot for those with the skills of the Advanced Technical Test Analyst. There is always a need for good test managers. Note the emphasis on the word “good”. And, many companies want Advanced Test Analysts as well because of the need for black-box testing and strong domain knowledge. Right now, the biggest growth is in the Advanced Technical Test Analyst area, but that can change quickly. Get your training now, so you’ll be ready.

It’s unlikely that we will run out of work anytime in the future because, as long as there are developers, there will be a need for testers. It’s built in job security! Plan and train for your future. It’s looking bright!

Source: All the above opinions are personal perspective on the basis of information provided by CSTB


Deep Learning vs. Machine Learning: The essential differences you need to know!

June 8th, 2017 by blogadmin No comments »


Machine learning and deep learning are all the rage! All of a sudden everyone is talking about them, whether or not they understand the differences! Whether you have been actively following data science or not, you will have heard these terms.

Just to show you the kind of attention they are getting, here is the Google trend for these keywords:


If you have often wondered what the difference between machine learning and deep learning is, read on for a detailed comparison in simple layman’s terms. We have explained each of these terms in detail.

 Table of Contents

  1. What is Machine Learning and Deep Learning?
    1. What is Machine Learning?
    2. What is Deep Learning?
  2. Comparison of Machine Learning and Deep Learning
    1. Data Dependencies
    2. Hardware Dependency
    3. Feature Engineering
    4. Problem Solving Approach
    5. Execution time
    6. Interpretability
  3. Where are Machine Learning and Deep Learning being applied right now?
  4. Pop Quiz
  5. Future Trends
1. What is Machine Learning and Deep Learning?

Let us start with the basics – What is Machine Learning and What is Deep Learning. If you already know this, feel free to move to section 2.

 1.1 What is Machine Learning?

The widely-quoted definition of Machine learning by Tom Mitchell best explains machine learning in a nutshell. Here’s what it says:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E ”

Did that sound puzzling or confusing? Let’s break this down with simple examples.

Example 1 – Machine Learning – Predicting weights based on height

Let us say you want to create a system which tells you the expected weight based on the height of a person. There could be several reasons why something like this could be of interest; for example, you could use it to filter out possible frauds or data capture errors. The first thing you do is collect data. Let us say this is what your data looks like:


Each point on the graph represents one data point. To start with, we can draw a simple line to predict weight based on height. For example, a simple line:

Weight (in kg) = Height (in cm) – 100

can help us make predictions. While the line does a decent job, we need to understand its performance. In this case, we can say that we want to reduce the difference between the predictions and the actuals. That is our way of measuring performance.

Further, the more data points we collect (experience), the better our model will become. We can also improve our model by adding more variables (e.g., gender) and creating different prediction lines for them.
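The toy model and its performance measure can be written in a few lines of Python (the data points are made up for illustration):

```python
def predict_weight(height_cm):
    """The simple line from the text: weight = height - 100."""
    return height_cm - 100

# Collected (height in cm, actual weight in kg) data points.
data = [(170, 68), (180, 82), (160, 62), (175, 77)]

def mean_absolute_error(data, model):
    """Performance measure: average gap between prediction and actual."""
    return sum(abs(model(h) - w) for h, w in data) / len(data)

print(mean_absolute_error(data, predict_weight))  # 2.0
```

Collecting more pairs, or fitting the line’s slope and intercept to the data instead of fixing them by hand, is exactly what “learning from experience” means here.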

Example 2 – Storm prediction System

Let us take a slightly more complex example. Suppose you are building a storm prediction system. You are given data on all the storms that have occurred in the past, along with the weather conditions three months before each occurrence.

Consider this: if we were to build a storm prediction system manually, what would we have to do?


We first have to scour through all the data and find patterns in it. Our task is to discover which conditions lead to a storm.

We can either model conditions explicitly, such as “the temperature is greater than 40 degrees Celsius and the humidity is in the range 80 to 100”, and feed these ‘features’ manually to our system.

Or else, we can make our system learn from the data what the appropriate values for these features should be.

Now, to find these values, the system goes through all the previous data and tries to predict whether there will be a storm or not. Based on the values of the features set by our system, we evaluate how the system performs, i.e., how many times it correctly predicts the occurrence of a storm. We can iterate this step multiple times, giving performance back to the system as feedback.

Let’s take our formal definition and apply it to our storm prediction system: the task ‘T’ is to find the atmospheric conditions that set off a storm. Performance ‘P’ is, of all the conditions provided to the system, how many times it correctly predicts a storm. And experience ‘E’ is the reiteration of the system over the historical data.
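A hedged sketch of that T/P/E loop, using made-up historical data and a single temperature-threshold feature: the system tries candidate values (experience E), measures how often each predicts correctly (performance P), and keeps the best rule for the task T:

```python
# Made-up history: (temperature in C, humidity in %, storm occurred?)
history = [(42, 90, True), (38, 85, False), (45, 95, True),
           (30, 60, False), (41, 82, True), (39, 88, False)]

def accuracy(threshold):
    """Performance P: fraction of past cases predicted correctly
    by the rule 'storm if temperature > threshold'."""
    hits = sum((temp > threshold) == storm for temp, _, storm in history)
    return hits / len(history)

# Experience E: evaluate candidate thresholds and keep the best one.
best_threshold = max(range(25, 50), key=accuracy)
print(best_threshold, accuracy(best_threshold))
```

A real system would search over many features (humidity, pressure, and so on) at once, but the feedback loop is the same: propose, measure, keep the better model.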

 1.2 What is Deep Learning?

The concept of deep learning is not new; it has been around for a number of years. But with all the recent hype, deep learning is getting more attention. As we did with machine learning, we will look at a formal definition of deep learning and then break it down with an example.

“Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones.”

Now, that one is confusing. Let us break it down with a simple example.

Example 1 – Shape detection

Let me start with a simple example which explains how things happen at a conceptual level. Let us try and understand how we recognize a square from other shapes.


The first thing our eyes do is check whether there are 4 lines associated with the figure (a simple concept). If we find 4 lines, we further check whether they are connected, closed, perpendicular, and equal (a nested hierarchy of concepts).

So, we took a complex task (identifying a square) and broke it into simpler, less abstract tasks. Deep learning essentially does this at a large scale.
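That nested hierarchy of checks can be sketched in code. Given 4 corner points, we compute the pairwise distances, then verify the simpler sub-concepts: four equal sides, and two equal diagonals in the right ratio to the sides (which implies the right angles):

```python
from math import dist, isclose

def is_square(p1, p2, p3, p4):
    """Nested checks: the 4 smallest pairwise distances are the
    sides, the 2 largest are the diagonals; a square needs equal
    sides and diagonals equal to sqrt(2) times a side."""
    pts = [p1, p2, p3, p4]
    d = sorted(dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:])
    sides, diagonals = d[:4], d[4:]
    return (isclose(sides[0], sides[3])
            and isclose(diagonals[0], diagonals[1])
            and isclose(diagonals[0], sides[0] * 2 ** 0.5))

print(is_square((0, 0), (1, 0), (1, 1), (0, 1)))  # True
print(is_square((0, 0), (2, 0), (2, 1), (0, 1)))  # False (rectangle)
```

Here a human decomposed the concept by hand; deep learning’s point is that the network discovers such a decomposition itself, layer by layer.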

Example 2 – Cat vs. Dog

Let’s take an example of an animal recognizer, where our system has to recognize whether the given image is of a cat or a dog.


If we solve this as a typical machine learning problem, we will define features such as whether the animal has whiskers, whether it has ears, and, if yes, whether they are pointed. In short, we define the facial features and let the system identify which features are most important in classifying a particular animal.

Now, deep learning takes this one step further. Deep learning automatically finds the features which are important for classification, whereas in machine learning we had to provide them manually. Deep learning works as follows:

  • It first identifies what are the edges that are most relevant to find out a Cat or a Dog
  • It then builds on this hierarchically to find what combination of shapes and edges we can find. For example, whether whiskers are present, or whether ears are present, etc.
  • After consecutive hierarchical identification of complex concepts, it decides which of these features are responsible for the answer.
2. Comparison of Machine Learning and Deep Learning

Now that you have an overview of machine learning and deep learning, we will take a few important points and compare the two techniques.

 2.1 Data dependencies

The most important difference between deep learning and traditional machine learning is how performance changes as the scale of data increases. When the data is small, deep learning algorithms don’t perform that well, because they need a large amount of data to learn from. Traditional machine learning algorithms, with their handcrafted rules, prevail in this scenario. The image below summarizes this fact.


2.2 Hardware dependencies

Deep learning algorithms depend heavily on high-end machines, contrary to traditional machine learning algorithms, which can work on low-end machines. This is because GPUs are an integral part of how deep learning works: deep learning algorithms inherently perform a large number of matrix multiplication operations, and these can be efficiently optimized on a GPU, because that is exactly what a GPU is built for.

2.3 Feature engineering

Feature engineering is the process of putting domain knowledge into the creation of feature extractors, in order to reduce the complexity of the data and make its patterns more visible to learning algorithms. This process is difficult and expensive in terms of both time and expertise.

In machine learning, most of the applied features need to be identified by an expert and then hand-coded according to the domain and data type.

For example, features can be pixel values, shape, texture, position, and orientation. The performance of most machine learning algorithms depends on how accurately the features are identified and extracted.
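To make the contrast concrete, here is a hedged sketch of hand-coded features for the cat-vs-dog task from earlier. The attribute names and the classification rule are invented for illustration; the point is that a human chose them:

```python
def handcrafted_features(animal):
    """Manual feature engineering: a human expert decides which
    attributes matter and hand-codes their extraction."""
    return {
        "has_whiskers": 1 if animal["has_whiskers"] else 0,
        "ears_pointed": 1 if animal["ears_pointed"] else 0,
        "snout_length": animal["snout_length_cm"],
    }

def classify(features):
    """A simple hand-tuned rule on top of the features."""
    return "cat" if features["has_whiskers"] and features["ears_pointed"] else "dog"

sample = {"has_whiskers": True, "ears_pointed": True, "snout_length_cm": 2.5}
print(classify(handcrafted_features(sample)))  # cat
```

Every line of `handcrafted_features` is human effort that deep learning, given enough data, replaces with features learned directly from raw pixels.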

Deep learning algorithms try to learn high-level features from data. This is a very distinctive part of deep learning and a major step ahead of traditional machine learning, because it removes the task of developing a new feature extractor for every problem. For example, a convolutional neural network will learn low-level features such as edges and lines in its early layers, then parts of faces, and then high-level representations of a whole face.


2.4 Problem Solving approach

When solving a problem using a traditional machine learning algorithm, it is generally recommended to break the problem down into different parts, solve each individually, and combine the results. Deep learning, in contrast, advocates solving the problem end-to-end.

Let’s take an example to understand this.

Suppose you have a task of multiple-object detection: identifying what each object is and where it is present in the image.


In a typical machine learning approach, you would divide the problem into two steps: object detection and object recognition. First, you would use a bounding-box detection algorithm like GrabCut to skim through the image and find all the possible objects. Then, for each of the detected objects, you would use an object recognition algorithm like an SVM with HOG features to recognize the relevant ones.

In the deep learning approach, by contrast, you would do the process end-to-end. For example, in a YOLO net (a type of deep learning algorithm), you pass in an image, and it gives out the locations along with the names of the objects.

 2.5 Execution time

Usually, a deep learning algorithm takes a long time to train, because it has so many parameters that training takes longer than usual. The state-of-the-art deep learning algorithm ResNet takes about two weeks to train completely from scratch, whereas machine learning takes comparatively much less time, ranging from a few seconds to a few hours.

This in turn is completely reversed at test time. At test time, a deep learning algorithm takes much less time to run. With k-nearest neighbors (a type of machine learning algorithm), by contrast, test time increases as the size of the data increases. This is not applicable to all machine learning algorithms, though, as some of them have small test times too.
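A minimal k-nearest-neighbors sketch makes the test-time cost visible: every single prediction has to measure the distance to all training points, so prediction gets slower as the training set grows (the toy data here is invented):

```python
from math import dist

# Toy training set: (2-D point, label).
train = [((1.0, 1.0), "a"), ((2.0, 2.0), "a"),
         ((8.0, 8.0), "b"), ((9.0, 9.0), "b")]

def knn_predict(point, train, k=3):
    """Scan the entire training set, take the k closest points,
    and return the majority label among them."""
    neighbors = sorted(train, key=lambda item: dist(point, item[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(knn_predict((1.5, 1.5), train))  # a
```

There is no training phase at all here; all the work happens at prediction time, which is the opposite of a deep network that trains for weeks but then answers quickly.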

2.6 Interpretability

Last but not least, we have interpretability as a factor for comparing machine learning and deep learning. This factor is the main reason deep learning is still thought over ten times before being used in industry.

Let’s take an example. Suppose we use deep learning to give automated scoring to essays. The performance it gives in scoring is quite excellent and is near human performance. But there’s an issue: it does not reveal why it has given that score. Mathematically, you can find out which nodes of a deep neural network were activated, but we don’t know what those neurons were supposed to model or what these layers of neurons were doing collectively. So we fail to interpret the results.

On the other hand, machine learning algorithms like decision trees give us crisp rules explaining why they chose what they chose, so it is particularly easy to interpret the reasoning behind them. Therefore, algorithms like decision trees and linear/logistic regression are primarily used in industry when interpretability matters.
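To illustrate, here is a hand-sized decision tree (the thresholds and attribute names are invented): every answer comes packaged with the exact rule that produced it, which is precisely the interpretability a deep network lacks:

```python
def loan_decision(income, credit_score, existing_debt):
    """A tiny decision tree that returns both the decision and
    the human-readable rule that fired."""
    if credit_score < 600:
        return "reject", "credit_score < 600"
    if existing_debt > income * 0.5:
        return "reject", "existing_debt exceeds 50% of income"
    return "approve", "credit_score >= 600 and debt within limits"

decision, rule = loan_decision(income=50000, credit_score=720,
                               existing_debt=10000)
print(decision, "because", rule)
```

An auditor or a rejected applicant can be shown the rule verbatim, something no activation map of a neural network provides.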

3. Where are Machine Learning and Deep Learning being applied right now?

The wiki article gives an overview of all the domains where machine learning has been applied. These include:

  • Computer Vision: for applications like vehicle number plate identification and facial recognition.
  • Information Retrieval: for applications like search engines, both text search, and image search.
  • Marketing: for applications like automated email marketing, target identification
  • Medical Diagnosis: for applications like cancer identification, anomaly detection
  • Natural Language Processing: for applications like sentiment analysis, photo tagging
  • Online Advertising, etc


The image given above aptly summarizes the application areas of machine learning, although it covers the broader topic of machine intelligence as a whole.

One prime example of a company using machine learning / deep learning is Google.


In the above image, you can see how Google is applying machine learning in its various products. The applications of machine learning and deep learning are endless; you just have to look for the right opportunity!

4. Pop Quiz

To assess if you really understood the difference, we will do a quiz. You can post your answers in this thread.

Please follow the steps below to answer each scenario completely.

  • How would you solve the scenario using machine learning?
  • How would you solve the scenario using deep learning?
  • Conclusion: which is the better approach?

Scenario 1:

You have to build a software component for a self-driving car. The system should take in raw pixel data from the cameras and predict the angle by which the steering wheel should turn.

Scenario 2:

Given a person’s credentials and background information, your system should assess whether that person should be eligible for a loan.

Scenario 3:

You have to create a system that can translate a message written in Russian to Hindi so that a Russian delegate can address the local masses.

5. Future Trends

The above article has given you an overview of machine learning and deep learning and the differences between them. In this section, I’m sharing my views on how machine learning and deep learning will progress in the future.

  • First of all, given the increasing use of data science and machine learning in the industry, it will become increasingly important for every company that wants to survive to inculcate machine learning into its business. Also, each and every individual will be expected to know the basic terminology.
  • Deep learning is surprising us each and every day and will continue to do so in the near future, because it is proving to be one of the best techniques yet discovered, with state-of-the-art performance in many domains.
  • Research is continuous in Machine Learning and Deep Learning. But unlike in previous years, where research was limited to academia, research in Machine Learning and Deep Learning is exploding in both industry and academia. And with more funds available than ever before, it is more likely to be a keynote in human development overall.

I personally follow these trends closely. I generally get my news from machine learning and deep learning newsletters, which keep me updated on recent developments. I also follow arXiv papers, and their accompanying code, which are published every day.

End notes

In this article, we took a high-level look at deep learning and machine learning and compared the two. I hope I have motivated you to learn more about both. If you want to go further, see the learning path for machine learning and the learning path for deep learning.

Source: All the above opinions are personal perspectives based on information provided by Analytics Vidhya


SAP Cash Application: Intelligent and Integrated Payment Clearing Automation for SAP S/4HANA powered by SAP Leonardo Machine Learning

May 23rd, 2017 by blogadmin No comments »

With its promise for new levels of automation and employee productivity, artificial intelligence (AI) is one of the hottest topics in the market.

But unless you cut through the hype, it is sometimes hard to understand whether you can really apply these concepts to your everyday business processes – especially if you cannot afford substantial technology investments or scores of data scientists. For example, if you are in corporate finance or shared services, you know you could benefit from a combination of different automation scenarios for accounts payable (AP) and accounts receivable (AR) processes. But how can you get started with AI, if you do not have the right expertise in-house?

Gaining Higher Efficiency and Improving Working Capital Metrics

This novel cloud service delivers an advanced level of automation for the clearing of incoming payments in SAP S/4HANA. It uses machine learning to match incoming electronic bank payments to open receivables. SAP Cash Application either automatically clears incoming payments or suggests a short list of possible clearing matches that your employees can quickly investigate. With this level of automation, you not only gain efficiency as your AR team can handle higher transaction loads, but also improve your days sales outstanding (DSO) and working capital metrics. With machine learning, it is easy to reach high automation rates without the need for a hand-tuned system.
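SAP has not published the internals of the matching model, but the general idea of scoring candidate matches between payments and open items can be sketched roughly as follows. Everything here (the field names, the 60/40 scoring weights, the 0.8 threshold, the use of string similarity) is an invented illustration of the concept, not SAP's implementation.

```python
from difflib import SequenceMatcher

# Hypothetical open receivables: invoice number -> (amount, customer name).
open_items = {
    "INV-1001": (1250.00, "Acme GmbH"),
    "INV-1002": (980.50, "Globex AG"),
    "INV-1003": (1250.00, "Initech SA"),
}

def score(payment, invoice_id):
    """Score how well an incoming payment matches an open item (0..1)."""
    amount, customer = open_items[invoice_id]
    # Amount closeness: 1.0 for an exact match, decaying with the difference.
    amount_score = max(0.0, 1.0 - abs(payment["amount"] - amount) / amount)
    # Text similarity between the payment note and invoice/customer strings.
    note = payment["note"].lower()
    text_score = max(
        SequenceMatcher(None, note, invoice_id.lower()).ratio(),
        SequenceMatcher(None, note, customer.lower()).ratio(),
    )
    return 0.6 * amount_score + 0.4 * text_score

def best_match(payment, threshold=0.8):
    """Return the best-scoring open item, or None to route to a human."""
    ranked = sorted(open_items, key=lambda i: score(payment, i), reverse=True)
    top = ranked[0]
    return top if score(payment, top) >= threshold else None

payment = {"amount": 1250.00, "note": "payment inv-1001 acme"}
print(best_match(payment))  # "INV-1001"
```

The key difference in the real service is that the scoring function and threshold are not hand-written rules like these; they are learned from historical clearing data and from the actions the AR team takes on exceptions, which is what lets the system adapt as formats and payment behavior change.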

Advancing Automation and Reducing Maintenance and Costs at Alpiq

Another major advantage of using machine learning in your payment clearing process is that your application seamlessly adapts to changing conditions, as it constantly learns from new data patterns and actions your AR team takes to match exceptions. This aspect is extremely important for Alpiq, one of our early adopters of this solution. Alpiq started a co-innovation project with SAP, because they wanted a solution that could effectively automate the payment clearing process with minimum maintenance and implementation costs.

Before working with SAP, Alpiq had used a traditional rule-based approach to automating its payment clearing process. But, with constant format changes and the addition of new payment methods, maintaining the rules had quickly become a challenge. As a result of the co-innovation project, Alpiq is confident it will be able to rely on a single integrated environment that learns from accountants’ behavior and leverages both historical data and existing AR workflows – with minimal maintenance required. Most importantly, the company is looking at automation rates of over 92 percent, enabling their shared service teams to process higher transaction volumes, focus on strategic tasks, and scale with the business.

Easy Access and Consumption

If you are considering increasing automation in finance, SAP Cash Application is a great way to start. As a SAP Leonardo cloud service, it can be instantly provisioned, and it automatically works with your SAP S/4HANA implementation. It allows you to take a pragmatic approach to innovation since you can start with a well-defined process that you can monitor and measure. This is important whether you are approaching automation for the first time or, like Alpiq, you want to modernize your automation strategy to lower costs and increase efficiency.

About SAP

As market leader in enterprise application software, SAP (NYSE: SAP) helps companies of all sizes and industries run better. From back office to boardroom, warehouse to storefront, desktop to mobile device – SAP empowers people and organizations to work together more efficiently and use business insight more effectively to stay ahead of the competition. SAP applications and services enable more than 345,000 business and public sector customers to operate profitably, adapt continuously, and grow sustainably. For more information, visit www.sap.com.


Tags: AI, Alpiq, artificial intelligence, machine learning, SAP Cash Application, SAP Leonardo, SAP S/4HANA, SAPPHIRE NOW

Source: All the above are personal perspectives based on information provided by SAP on SAP Cash Application





May 12th, 2017 by blogadmin No comments »

“The Report Global Software Testing Market 2017-2021 provides information on pricing, market analysis, shares, forecast, and company profiles for key industry participants. – MarketResearchReports.biz”

The global software testing market is poised to exhibit a strong CAGR of 14% from 2017 to 2021, according to a report recently added to the growing repository of MarketResearchReports.biz. Titled “Global Software Testing Market 2017-2021”, the study highlights the current scenario in the market and its future trajectory. It answers some of the pertinent questions related to the software testing market, such as what is the current and projected size of the market, what are the major challenges and trends that are likely to impact the market, and what are the outcomes of the five forces analysis.

The report states that one of the most prominent factors boosting the uptake of software testing services is the surge in test automation services and agile testing services. Companies have been adopting these services to improve the quality of cloud infrastructure and to put into practice newer methodologies for software testing services. The market is also fueled by the rising pressure on software providers to offer business as well as product value.

Technology-wise, the market is bifurcated into product testing and application testing. The latter is not only the largest segment but is also poised to register the highest CAGR over the course of the forecast period. Application testing may include a range of services: mobile application testing, new offers testing, security testing, and functional as well as non-functional testing. In addition to this, the segment is fueled by the increasing demand for enterprise mobility.

In terms of end use, software testing services are in demand in sectors such as telecom, media, banking, financial services, and insurance (BFSI), IT, and retail. The BFSI sector accounts for the leading share owing to the rising need to help customers access financial services on the go.

By way of geography, the worldwide market for software testing is segmented into Asia Pacific; Europe, the Middle East, and Africa; and the Americas. Currently, the Americas hold the largest share, and this regional segment is slated to continue its dominance over the global market throughout the forecast period. Within the Americas, the banking and telecom industries are the main end users of software testing services, which can be attributed to the growing consumerization of location-based applications and data services. The soaring adoption of and demand for cloud services is also sure to boost the software testing market in the Americas.

View Press Release @ http://www.marketresearchreports.biz/pressrelease/4269

The most prominent players in the global software testing market are Capgemini, Wipro, IBM, and Accenture. Together, these players dominate the overall market, making its vendor landscape highly consolidated. These few established companies have been joining forces with smaller vendors to accelerate their ongoing innovations and enhance their software testing offerings. The aforementioned players also enjoy a strong hold over most regional markets. Other major companies operating in the software testing market are Atos, UST Global, Steria, Gallop Solutions, CSC, Cigniti Technologies, Tech Mahindra, Deloitte, NTT DATA, and Infosys.

Source: All the above are personal perspectives based on information provided by Latest Software Testing News & MarketResearchReports.biz

Source: http://www.latestsoftwaretestingnews.com/?p=8738