SAP Cloud Platform Helps HR Leaders Connect and Extend SAP SuccessFactors Solutions

September 8th, 2017 by blogadmin

With simple access and easy connectivity to more than 100 applications that complement the SAP SuccessFactors HCM Suite, HR leaders can now meet specific and immediate business needs more easily than ever before. This announcement was made at SuccessConnect, taking place August 29–31 at The Cosmopolitan of Las Vegas.

“SAP has a vibrant partner ecosystem building innovative app extensions to SAP SuccessFactors solutions on SAP Cloud Platform,” said SAP SuccessFactors President Greg Tomb. “These extensions help customers solve business problems, differentiate and innovate, and accelerate their digital HR transformation. With the power of our strong ecosystem, we enable our customers with cutting-edge technology that helps them execute against strategy and put their people at the center of business.”

SAP and its partners continue to drive rapid innovation and increase the strategic value of human resources across the enterprise. Among the extensions that will be on display at SuccessConnect in Las Vegas are:

  • EnterpriseAlumni by EnterpriseJungle: Integration of corporate alumni into HR landscapes immediately transforms and expands talent supply, including boomerang hires. Alumni management helps drive recruitment, business development and corporate evangelism.
  • org.manager by Ingentis: Workforce planning, org modeling and charting based on SAP SuccessFactors data support mergers and acquisitions by enabling visualization of multiple org structures and allowing drag and drop from one org chart to another.
  • Employee Engagement Suite by Semos: A new approach to employee engagement supports the growing HR needs in the areas of recognition and rewards, continuous feedback, organizational surveys, health and wellness, and employee work productivity.
  • Labor Management by Sodales: This employee relationship management solution offers features covering grievances, progressive disciplines, performance reviews and seniority rules management, including those involving unionized work environments. Management can perform investigations of each grievance with online collective bargaining agreements, manage grievance costs, conduct disciplinary actions and record all steps in a single place.

As the technology foundation for digital innovation with the SAP Leonardo system, SAP Cloud Platform allows organizations to connect people with processes and things beyond the SAP SuccessFactors HCM Suite and leverage transformative technologies including the Internet of Things (IoT), machine learning, Big Data and analytics, blockchain and data intelligence. The value of SAP Cloud Platform to customers includes:

  • Application extensions: Accelerate time to value for building and deploying apps and extensions that engage employees in new ways, allowing HR to be flexible and innovative without compromising their core HR process.
  • User experience: Achieve an intuitive, consistent and harmonized user experience across SAP SuccessFactors solutions and platform extensions. This empowers organizations to personalize their end-to-end HR landscape with a seamless, secure and beautiful user experience.
  • Data-driven insights: Leverage data from SAP SuccessFactors solutions and any other source to help make insightful business decisions that drive intelligent people strategies.

“Enterprise Information Resources Inc. (EIR) is an SAP Cloud Application partner, expert in optimizing and transforming compensation systems to provide business value immediately,” said France Lampron, president and CEO of Enterprise Information Resources Inc. “Capitalizing on the extensibility of SAP SuccessFactors solutions by leveraging SAP Cloud Platform provides us with an ideal development platform for EIR’s extension application — EIR Compensation Analytics.”

Solution extensions from SAP partners can be found in SAP App Center. HR professionals around the globe can virtually join key sessions at this year’s SuccessConnect in Las Vegas by registering here.

Source: All the above opinions are a personal perspective based on information provided by SAP

https://news.sap.com/sap-cloud-platform-helps-hr-leaders-connect-and-extend-sap-successfactors-solutions/


Bridging two worlds: Integration of SAP and Hadoop Ecosystems

September 6th, 2017 by blogadmin

Source: http://www.saphanacentral.com/2017/07/bridging-two-worlds-integration-of-sap.html

The proliferation of web applications, social media and the internet of things, coupled with large-scale digitalization of business processes, has led to an explosive growth in the generation of raw data. Enterprises across industries are starting to recognize every form of data as a strategic asset and are increasingly leveraging it for complex data-driven business decisions. Big Data solutions are being used to realize enterprise ‘data lakes’, storing processed or raw data from all available sources and powering a variety of applications and use cases.

Big Data solutions will also form a critical component of enterprise solutions for predictive analytics and IoT deployments in the near future. SAP on-premise and on-demand solutions, especially the HANA platform, will need closer integration with the Hadoop ecosystem.

What exactly is ‘Big Data’?
Data sets can be characterized by their volume, velocity and variety. Big Data refers to the class of data with one or more of these attributes significantly higher than in traditional data sets.

Big data presents unique challenges for all aspects of data processing solutions including acquisition, storage, processing, search, query, update, visualization, transfer and security of the data.

Hadoop Big Data Solution to the Rescue!

Hadoop is an open-source software framework for distributed storage and processing of Big Data using large clusters of machines.

Hadoop’s modus operandi is divide and conquer: break the task into small chunks, store and process them in parallel over multiple nodes, and combine the results.

Hadoop is not the only available Big Data Solution. Several commercial distributions/variants of Hadoop and other potential alternatives exist which are architecturally very different.

Combining high speed in-memory processing capabilities of SAP HANA with Hadoop’s ability to cost-effectively store and process huge amounts of structured as well as unstructured data has limitless possibilities for business solutions.

A Hadoop system can be an extremely versatile addition to any SAP business system landscape while acting as:

  • A simple database and/or a low-cost archive to extend the storage capacity of SAP systems for retaining large volumes of historical or infrequently used data
  • A flexible data store to enhance the capabilities of the persistence layer of SAP systems and provide efficient storage for semi-structured and unstructured data such as XML, JSON, text and images
  • A massive data processing / analytics engine to extend or replace the analytical / transformational capabilities of SAP systems, including SAP HANA


Getting acquainted: Hadoop Ecosystem

Hadoop at its core is a Java-based software library which provides utilities/modules for distributed storage and parallel data processing across a cluster of servers. However, in common parlance the term ‘Hadoop’ almost invariably refers to an entire ecosystem which includes a wide range of Apache open-source and/or commercial tools based on the core software library.

Hadoop is currently available either as a set of open-source packages or via several enterprise-grade commercial distributions. Hadoop solutions are available as SaaS/PaaS cloud offerings from multiple vendors in addition to traditional offerings for on-premise deployments.

A snapshot of prominent distributions/services according to the latest market guides from Gartner and Forrester Research:

  • Apache Hadoop [open source]
  • Cloudera Enterprise | Cloudera CDH [open source]
  • Hortonworks Data Platform HDP [open source]
  • MapR
  • IBM BigInsights (potentially discontinued)
  • Amazon Elastic MapReduce (EMR)
  • Microsoft Azure HDInsight
  • Google Cloud Dataproc
  • Oracle Big Data Cloud Services
  • SAP Cloud Platform Big Data Service (formerly SAP Altiscale Data Cloud)

The Hadoop core components serve as the foundation for the entire ecosystem of data access and processing solutions.

  • Hadoop HDFS is a scalable, fault-tolerant, distributed storage system which stores data using native operating system files over a large cluster of nodes. HDFS can support any type of data and provides a high degree of fault tolerance by replicating files across multiple nodes.
  • Hadoop YARN and Hadoop Common provide the foundational framework and utilities for resource management across the cluster.

Hadoop MapReduce is a framework for the development and execution of distributed data processing applications. Spark and Tez, alternative processing frameworks based on data-flow graphs, are considered the next-generation replacements for MapReduce as the underlying execution engine for distributed processing in Hadoop.

Map: split and distribute the job

Reduce: collect and combine the results
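
To make the Map and Reduce steps concrete, below is a minimal word-count sketch written for Hadoop Streaming, which lets the mapper and reducer be plain scripts that read from stdin and write to stdout; the file names and input are illustrative, not from any specific project.

```python
#!/usr/bin/env python
# mapper.py - the "Map" step: split each input line into words and emit (word, 1) pairs.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py - the "Reduce" step: Hadoop sorts mapper output by key, so all counts
# for the same word arrive together and can be summed in a single pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, 0
    current_count += int(count)

if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

The pair would typically be submitted to the cluster with the Hadoop Streaming jar, passing mapper.py and reducer.py along with the HDFS input and output paths.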


A variety of data access / processing engines can run alongside the Hadoop MapReduce engine to process HDFS datasets. The Hadoop ecosystem is continuously evolving, with components frequently having complementing, overlapping and/or similar-appearing capabilities but vastly different underlying architectures or approaches.

Popular components, applications, engines and tools within the Hadoop ecosystem (not an exhaustive list; several more open-source and vendor-specific applications are used by enterprises for specific use cases):

  • Pig — Platform for development and execution of high-level language (Pig Latin) scripts for complex ETL and data analysis jobs on Hadoop datasets
  • Hive — Read-only relational database that runs on top of the Hadoop core and enables SQL-based querying of Hadoop datasets; supported by the Hive Metastore (see the sketch after this list)
  • Impala — Massively Parallel Processing (MPP) analytical database and interactive SQL-based query engine for real-time analytics
  • HBase — NoSQL (non-relational) database which provides real-time random read/write access to datasets in Hadoop; supported by HCatalog
  • Spark — In-memory data processing engine which can run either over Hadoop or standalone as an alternative / successor to Hadoop itself
  • Solr — Search engine / platform enabling powerful full-text search and near-real-time indexing
  • Storm — Streaming data processing engine for continuous computations and real-time analytics
  • Mahout — Library of statistical, analytical and machine learning software that runs on Hadoop and can be used for data mining and analysis
  • Giraph — Iterative graph processing engine based on the MapReduce framework
  • Cassandra — Distributed NoSQL (non-relational) database with extreme high-availability capabilities
  • Oozie — Scheduler engine to manage jobs and workflows
  • Sqoop — Extensible application for bulk transfer of data between Hadoop and structured data stores / relational databases
  • Flume — Distributed service enabling ingestion of high-volume streaming data into HDFS
  • Kafka — Stream processing and message brokering system
  • Ambari — Web-based tool for provisioning, managing and monitoring Hadoop clusters and various native data access engines
  • Zookeeper — Centralized service maintaining Hadoop configuration information and enabling coordination among distributed Hadoop processes
  • Ranger — Centralized framework to define, administer and manage fine-grained access control and security policies consistently across Hadoop components
  • Knox — Application gateway which acts as a reverse proxy, provides perimeter security for the Hadoop cluster and enables integration with SSO and IDM solutions
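
As a concrete illustration of the SQL-on-Hadoop access that Hive provides (see the Hive entry above), the sketch below runs a HiveQL query from Python through the PyHive client. The host, port, user and table names are placeholders, not values from this article.

```python
# Hedged sketch: querying a Hive table over HiveServer2 using PyHive.
# Host, port, username and the queried table are illustrative placeholders.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000,
                       username="analyst", database="default")
cursor = conn.cursor()

# HiveQL is compiled into distributed jobs that run over the data stored in HDFS.
cursor.execute("SELECT product_id, SUM(quantity) AS total_qty "
               "FROM sales GROUP BY product_id LIMIT 10")

for product_id, total_qty in cursor.fetchall():
    print(product_id, total_qty)

cursor.close()
conn.close()
```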

Bridging two worlds – Hadoop and SAP Ecosystems

SAP solutions, especially the SAP HANA platform, can be ‘integrated’ with the Hadoop ecosystem using a variety of solutions and approaches, depending upon the specific requirements of the use case.

SAP solutions which should be considered for the integration include:

  • SAP BO Data Services
  • SAP BO BI Platform | Lumira
  • SAP HANA Smart Data Access
  • SAP HANA Enterprise Information Management
  • SAP HANA Data Warehousing Foundation – Data Lifecycle Manager
  • SAP HANA Spark Controller
  • SAP Near-Line Storage
  • SAP Vora
  • …. <possibly more !>
  • Apache Sqoop [Not officially supported by SAP for HANA]

SAP HANA can leverage HANA Smart Data Access to federate data from Hadoop (access Hadoop as a data source) without copying the remote data into HANA. SDA enables data federation (read/write) using virtual tables and supports Apache Hadoop/Hive and Apache Spark as remote data sources in addition to many other database systems including SAP IQ, SAP ASE, Teradata, MS SQL, IBM DB2, IBM Netezza and Oracle.

Hadoop can be used as a remote data source for virtual tables in SAP HANA using the following adaptors (built into HANA):

  • Hadoop/Spark ODBC Adaptor — Requires installation of Unix ODBC drivers plus Apache Hive/Spark ODBC drivers on the HANA server
  • Spark SQL Adaptor — Requires installation of SAP HANA Spark Controller on the Hadoop cluster (recommended adaptor)
  • Hadoop Adaptor (WebHDFS)
  • Vora Adaptor
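
To illustrate the federation approach described above, the following hedged sketch uses the SAP HANA Python client (hdbcli) to create a virtual table over a remote Hive table and query it from HANA. It assumes a remote source has already been configured (for example via the Spark SQL adaptor); the connection details, the remote source name HADOOP_SRC and the table names are placeholders, and the exact four-part remote object name depends on the adaptor and the Hadoop setup.

```python
# Hedged sketch: querying Hadoop data from SAP HANA via Smart Data Access.
# Assumes a remote source named "HADOOP_SRC" already exists; host, credentials
# and table names are illustrative placeholders.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host.example.com", port=30015,
                     user="SDA_USER", password="secret")
cursor = conn.cursor()

# Expose a remote Hive table as a virtual table inside HANA (no data is copied).
cursor.execute(
    'CREATE VIRTUAL TABLE "SDA_USER"."VT_SALES" '
    'AT "HADOOP_SRC"."<NULL>"."default"."sales"'
)

# The virtual table can now be queried and joined with native HANA tables.
cursor.execute('SELECT COUNT(*) FROM "SDA_USER"."VT_SALES"')
print(cursor.fetchone()[0])

cursor.close()
conn.close()
```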

SAP HANA can also leverage HANA Smart Data Integration to replicate required data from Hadoop into HANA. SDI provides pre-built adaptors and an adaptor SDK to connect to a variety of data sources including Hadoop. HANA SDI requires installation of the Data Provisioning Agent (containing the standard adaptors) and native drivers for the remote data source on a standalone machine. The SAP HANA XS engine-based DWF-DLM can relocate data from HANA to or between HANA Dynamic Tiering, HANA extension nodes, SAP IQ and Hadoop via the Spark SQL adaptor / Spark Controller.


SAP Vora is an in-memory query engine that runs on top of the Apache Spark framework and provides enriched interactive analytics on the data in Hadoop. Data in Vora can be accessed in HANA either directly via the Vora Adaptor or via the Spark SQL Adaptor (HANA Spark Controller). It supports the Hortonworks, Cloudera and MapR distributions.

SAP BO Data Services (BODS) is a comprehensive data replication/integration (ETL processing) solution. SAP BODS has capabilities (via packaged drivers, connectors and adaptors) to access data in Hadoop, push data to Hadoop, process datasets in Hadoop and push ETL jobs to Hadoop using Hive/Spark queries, Pig scripts, MapReduce jobs and direct interaction with HDFS or native OS files. SAP SLT does not have native capabilities to communicate with Hadoop.

SAP Business Objects (BI Platform, Lumira, Crystal Reports, …) can access and visualize data from Hadoop (HDFS – Hive – Spark – Impala – SAP Vora), with the ability to combine it with data from SAP HANA and other non-SAP sources. SAP BO applications use their in-built ODBC/JDBC drivers or generic connectors to connect to the Hadoop ecosystem. Apache Zeppelin can be used for interactive analytics visualization from SAP Vora.

Apache Sqoop enables bulk transfer of data between unstructured, semi-structured and structured data stores. Sqoop can be used to transfer data between Apache Hadoop and relational databases including SAP HANA (although not officially supported by SAP).

Getting started: Hadoop Deployment Overview

Hadoop is a distributed storage and processing framework which would typically be deployed across a cluster consisting of up to several hundred or even thousands of independent machines, each participating in data storage as well as processing. Each cluster node could either be a bare-metal commodity server or a virtual machine. Some organizations prefer to have a small number of larger clusters; others choose a greater number of smaller clusters based on workload profile and data volumes.

HDFS data is replicated to multiple nodes for fault tolerance. Hadoop clusters are typically deployed with an HDFS replication factor of three, which means each data block has three replicas – the original plus two copies. Accordingly, the raw storage requirement of a Hadoop cluster is at least three times the anticipated input/managed dataset size, and is commonly planned at around four times to leave room for intermediate and temporary data (for example, 100 TB of managed data would call for roughly 400 TB of raw cluster storage). The recommended storage option for Hadoop clusters is nodes with local (Direct Attached Storage – DAS) disks. SAN / NAS storage can be used but is uncommon and generally less efficient, since a Hadoop cluster is inherently a ‘share nothing’ architecture.

Nodes are distinguished by their type and role. Master nodes provide key central coordination services for the distributed storage and processing system while worker nodes are the actual storage and compute nodes.

Node roles represent sets of services running as daemons or processes. A basic Hadoop cluster consists of the NameNode (plus standby), DataNode, ResourceManager (plus standby) and NodeManager roles. The NameNode coordinates data storage on DataNodes, while the ResourceManager coordinates data processing on NodeManager nodes within the cluster. The majority of nodes within the cluster are workers, which typically perform both the DataNode and NodeManager roles; however, there can be data-only or compute-only nodes as well.

Deployment of various other components/engines from the Hadoop ecosystem brings more services and node types/roles into play, which can be added to cluster nodes. Node assignment for the various services of any application is specific to that application (refer to the installation guides). Many such components also need their own database for operations, which is part of the component's services. Clusters can have dedicated nodes for application engines with very specific requirements such as in-memory processing and streaming.

Master / management nodes are deployed on enterprise-class hardware with HA protection, while worker nodes can be deployed on commodity servers, since the distributed processing paradigm itself and HDFS data replication provide the fault tolerance. Hadoop has its own built-in failure-recovery algorithms to detect and repair Hadoop cluster components.

The typical specification of Hadoop cluster nodes depends on whether the workload profile is storage intensive or compute intensive; not all nodes need to have identical specifications:

  • 2 quad-/hex-/octo-core CPUs and 64-512 GB RAM
  • 1-1.5 disks per core; 12-24 hard disks of 1-4 TB each; JBOD (Just a Bunch Of Disks) configuration for worker nodes and RAID protection for master nodes

Hadoop applications (data access / processing engines and tools like Hive, HBase, Spark, Storm, SAP HANA Spark Controller and SAP Vora) can be deployed across the cluster nodes either using provisioning tools like Ambari / Cloudera Manager or manually.

The deployment requirement of a given component could be: all nodes, at least one node, one or more nodes, multiple nodes, or a specific type of node.


Getting started: Deployment Overview of ‘bridging’ solutions from SAP

SAP Vora consists of a Vora Manager service and a number of core processing services which can be added to various compute nodes of the existing Hadoop deployment:

— Install SAP Vora Manager on management node using SAP Vora installer

— Distribute SAP Vora RPMs to all cluster nodes and install them using Hadoop cluster provisioning tools

— Deploy Vora Manager on cluster nodes using Hadoop cluster provisioning tools

— Start Vora Manager and Deploy various Vora services across the cluster using Vora Manager

SAP HANA Spark Controller provides an SQL interface to the underlying Hive/Vora tables using Spark SQL and needs to be added to at least one of the master nodes of the existing Hadoop deployment:

— Install Apache Spark assembly files (open source libraries); not provided by SAP installer

— Install SAP HANA Spark Controller on master node using Hadoop Cluster Provisioning tools

SAP HANA Smart Data Integration and Smart Data Access are native components of SAP HANA and do not require separate installation. Smart Data Integration does, however, require activation of the Data Provisioning server and installation of Data Provisioning Agents:

— Enable Data Provisioning server on HANA system

— Deploy Data Provisioning Delivery Unit on HANA system

— Install and configure Data Provisioning Agents on the remote data source host or a standalone host

SAP HANA Data Warehousing Foundation – Data Lifecycle Manager is an XS engine-based application and requires separate installation on the HANA platform.

SAP Cloud Platform Big Data Service (formerly SAP Altiscale Data Cloud) is a fully managed, cloud-based Big Data offering which provides a pre-installed, pre-configured, production-ready Apache Hadoop platform.

— The service includes Hive, HCatalog, Tez and Oozie from the Apache Hadoop ecosystem in addition to the core Hadoop MapReduce / Spark and HDFS layers

— Supports and provides a runtime environment for the Java, Pig, R, Ruby and Python languages

— Supports deployment of non-default third-party applications from the Hadoop ecosystem

— Service can be consumed via SSH and webservices

— Essentially a Platform-as-a-Service (PaaS) offering and comprises :

— Infrastructure provisioning

— Hadoop software stack deployment and configuration

— Operational support and availability monitoring

— Enables customers to focus on business priorities and their analytic / data-science aspects by delegating technical setup and management of the Hadoop platform software stack and the underlying infrastructure to SAP’s cloud platform support team

Final thoughts

Gartner’s market research and surveys assert that Hadoop adoption is steadily growing and is also shifting from the traditional monolithic, on-premise deployments to ad hoc or on-demand cloud instances.

Get ready to accomplish big tasks with big data!

WHERE TO START WITH SAP S/4HANA

August 18th, 2017 by blogadmin

Have you ever wondered, “Where do I start with SAP S/4HANA?” There are four strategies you can begin immediately that will pay off with a smoother deployment.

These strategies are written with new SAP deployments in mind. For organizations already running SAP ERP and converting it to SAP S/4HANA, the strategies would be a bit different.

Prepare Your Business Users

Getting business users involved is as important as any technical aspect of the project. This is because SAP S/4HANA is not merely ERP running in-memory. SAP S/4HANA uses a simpler data model to transform business processes. For example, there is no more data reconciliation between finance and controlling in the financial period-end close, ending the most tedious and error-prone part of the entire process. This is a major productivity win for finance, of course, but it is still a change and one they need to know up front.

Financial close improvements are just one example. Business Value Adviser can help you understand the many other process improvements. Also, most successful SAP S/4HANA projects begin with a prototype, often running inexpensively in the cloud on a trial system.

Prepare Your Data

SAP is ready with a complete set of data migration tools including templates, data mapping, and data cleansing capability. You can start investigating the data mapping right away. Since SAP S/4HANA is built on a simpler data model and has fewer tables, getting data into SAP S/4HANA is easier than with other ERP systems.

You should also decide how much historical data you want to include. You can reduce cost by using data aging so that only the most useful data is stored in memory while the rest is available on disk-based systems.

Organize the Deployment Team

Organizations new to SAP have nothing to decide when it comes to the deployment path. You set up a new SAP S/4HANA deployment and migrate data from the legacy system. Organizations already running SAP ERP have more to do at this point, especially if converting their system to SAP S/4HANA.

Instead, focus on the deployment team, perhaps bringing SAP experts on board through hiring or teaming up with an SAP partner. The most successful deployments do initial workshops for functional planning, set up prototype and test systems, and start getting end user feedback early on.

The deployment team should also familiarize themselves with SAP Activate, for the latest best practices and guided configuration.

Determine the Deployment Destination

The move to SAP S/4HANA is an ideal time to bring ERP into your cloud strategy. Since it is likely that an organization new to SAP does not have SAP HANA expertise, this makes SAP S/4HANA a prime candidate to run in the cloud.

Perhaps a more accurate term, though, would be clouds. You have a complete choice of deployment with SAP S/4HANA, including public cloud, IaaS (Amazon, Azure), and private cloud with SAP or partners. On premise is an option as well, of course.

Other ERP products are completely different from one deployment option to the next, and many don’t even have an on premise or private cloud option. Whether the destination is on premise or clouds, SAP S/4HANA uses the same code line, data model, and user experience, so you get the consistency essential to hybrid or growing environments. This means that instead of supporting disparate products, IT spends more time on business process improvement.

Source: All the above opinions are a personal perspective based on information provided by SAP on SAP S/4HANA

http://news.sap.com/where-to-start-with-sap-s4hana/

QA trends to be aware of in 2017

August 18th, 2017 by blogadmin

Based on recently published World Quality Report data and the most frequent QA demands from customers, below is a compiled list of QA trends that are going to make a difference in 2017. All of them are worth attention, as they can bring tangible benefits to QA vendors who build them into their service lines.

Test Automation

This is the undeclared king of QA services and remains the best way to speed release cycles, quickly test fixes, and rapidly evolve code in order to catch defects and not delay deployment. There is still a lot of room for improvement, as many companies run manual testing and are just now starting to think about adopting test automation.

Test automation is not an easy undertaking. It is quite challenging for many vendors and should be precisely tailored to the demands of every customer. It’s not a rare case when we have to deal with an already developed test automation solution that doesn’t fit the customer’s specific business context and has to be redeveloped.

Also, we face an increasing number of potential customers who worry their in-house QA team may be unable to handle the newly created solution. We understand their worries, and to address them a behavior-driven development (BDD) approach can be applied: it requires no script-writing skills and enables the QA team to handle the automated tests more easily.
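
For example, with a BDD framework such as behave for Python, the scenario is written in plain language and a small step-definition module maps each phrase to automation code. The feature text and step names below are purely illustrative, not taken from any specific customer project.

```python
# steps/login_steps.py - illustrative step definitions for a behave scenario such as:
#   Scenario: Successful login
#     Given the login page is open
#     When the user signs in with valid credentials
#     Then the dashboard is displayed
from behave import given, when, then

@given("the login page is open")
def step_open_login_page(context):
    # In a real suite this would drive a browser or an API client.
    context.page = "login"

@when("the user signs in with valid credentials")
def step_sign_in(context):
    context.page = "dashboard" if context.page == "login" else "error"

@then("the dashboard is displayed")
def step_check_dashboard(context):
    assert context.page == "dashboard"
```

The business-facing scenario stays readable to non-programmers, while the automation logic lives in a handful of reusable step functions.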

Prediction

Script-less test automation will take on greater importance and test automation will keep going far beyond functional testing. It will provide opportunities for software testers to hone their full-cycle automation skills and not simply enhance their functional testing abilities.

Internet of Things

85% of World Quality Report participants say internet of things (IoT) products are part of their business operations. Most commonly, IoT devices and apps are tested for security, performance and usability.

Less frequently we test such aspects as compatibility, interoperability and resource utilisation, but they also matter to ensure a flawless user experience.

To guarantee deep test coverage, we insist that our testers not just validate the device itself or its connection properties, but also think outside the box to check the most unlikely and rare scenarios, as domain expertise alone is no longer enough for comprehensive testing of IoT products.

Example:

If you put on an Apple Watch, open the heart rate screen and then take the device off and place it on a pair of jeans or on a towel (fabric matters), the pulse rate reaches 200 beats per minute. Too much for a towel!

Is this situation real? And how should it be treated, as a bug or as functionality?

In fact, quite real: the user finishes his training in the gym, takes the device off and puts it on a towel. But to predict such a scenario a tester should act as a real user and test the device in real life.

Prediction

In the near future, the “out-of-the-box thinking” problem will be solved by means of artificial intelligence solutions. If designed well, they’ll provide real-time monitoring and analytics of IoT products.

Big Data

The digital revolution has led to the rise of big data. Large companies frequently ask for strategies to test big data systems that appear to be too large in volume to be managed in traditional ways.

The most frequent issues are not having enough storage space to back up test data, or not being able to manage the data on a single server. What we pay attention to when working on big data system quality assurance is the importance of verifying data completeness, ensuring data quality and automating regression testing. A tester has to be surgical about testing rather than taking a brute-force approach.

Prediction

There will likely be new and innovative ways, methods and techniques to provide big data testing. Test automation will also be widely applied, as there is simply too much data and too little time to manually drill down into all the data.

New Service Types

Alongside traditional services, QA vendors are working to develop new services that will bring value to customers and help them gain a competitive advantage on the market. For example, our company has developed a baseline-testing offer. What is the idea? Baseline testing includes an estimation of the application’s overall quality level and a roadmap for increasing it.

QA consulting and test advisory services are also gaining popularity. They are in demand among customers who want to develop in-house testing strategies from scratch or improve existing ones, rather than outsource their testing needs on a regular basis.

Also, a vast number of testing service providers offer to establish a corporate Testing Center of Excellence. TCoE specialists run in-house tests and expand the competencies of internal specialists.

Prediction

With traditional software testing services now offered by many companies, QA vendors will have to think of new service lines to stay ahead of competitors.

Security Testing

It seems that trends are not only about emerging novelties; they are also about perennially popular topics. Security testing is one topic that will never go out of style. What QA providers should be preparing for now is to handle a steady increase in systematic security testing of all types of software products, and to provide staff augmentation to strengthen security testing across the product development life cycle.

Mobile app security is a significant field, as the number of mobile devices and the applications we download grow rapidly alongside the number of attacks. The demand for security testing of mobile applications will likewise increase due to the large number of applications working with users’ personal data.

The importance of security testing of IoT products will also increase in 2017. The vulnerability of IoT developments manifested itself when the Mirai botnet emerged in 2016; it has been utilised by hackers to launch high-profile DDoS attacks against different Internet properties and services. While there are mechanisms to mitigate DDoS attacks, there is no way to find out what the next target will be. Again, users should be aware of the threat of using simple passwords and opening devices to remote access. Security specialists should continuously expand their competencies, working with all the novelties on the market.

The importance of cloud computing security will also increase. More and more companies are turning to cloud-based solutions due to ease of use and the opportunity to quickly scale the architecture when needed. At the same time, cloud infrastructure is very attractive to any attacker, because it gives access to all of a company’s resources and personal information.

Prediction

To stop the vulnerability trend from becoming a reality in 2017, users, mobile app developers and software testers should join forces. Users have to become smarter downloaders and learn not to share personal data with every app installed; developers should follow at least basic security practices; and testing engineers should be able to identify threats to the app and help develop countermeasures.

Source: All the above opinions are a personal perspective based on information provided by Software Testing News

http://www.softwaretestingnews.co.uk/qa-trends-to-be-aware-of-in-2017/

Maximize the Value of Cloud ERP with SAP S/4HANA Cloud

August 4th, 2017 by blogadmin

Today’s business is moving faster than ever thanks to digital technologies. In this environment, gaining a digital edge means gaining the ability to focus on what matters most to the future of your business and your customers so you don’t fall behind.

In this discussion with early adopters of SAP S/4HANA Cloud, it’s become clear why intelligent cloud ERP is at the core of their digital value creation.

Intelligent cloud ERP is not coming — it is here. It’s ready to be adopted, consumed, and built on.

In the cloud, SAP is going from the system of record to the system of innovation, building on 45 years of experience with best practices and translating that into a set of capabilities only possible with the cloud.

What that means for customers is access to a system that is fast to implement, easy to use, free from infrastructure maintenance, built to provide the highest process standards, and constantly upgrading to offer the latest in machine learning and other innovations. Time to value is dramatically reduced, innovation is delivered by SAP continuously and customers and partners are enabled to deliver innovation themselves.

Three themes emerged in the discussion to show why customers are finding a competitive edge and generating new value with our intelligent cloud ERP solution.

Speed is a Strategic Advantage

Business is moving incredibly fast, which is why it’s essential that you have a rapid-to-adopt intelligent core ERP. Intelligent cloud ERP enables you to quickly implement your new system across the organization and bring your core processes on board.

Beyond adoption, cloud also enables fast upgrades. We are pushing out regular quarterly upgrades and releases, providing customers with the latest technologies. This velocity of upgrades and new capabilities isn’t possible with an on-premises adoption.

And there is another aspect of speed customers can’t live without: the speed to see data in real-time and act on it. Real-time architecture provides data (knowledge) — and also the tools to act on it quickly.

Standardization to Best Practices is Critical

SAP S/4HANA Cloud is based on a Fit-to-Standard model that embeds best practices across the company. This model brings processes into alignment based on decades of experience and unique knowledge of effective and efficient core business processes. It’s a major break from the complex, customized ERP implementations of yesterday.

Customers tell us the Fit-to-Standard model helps them understand what they need to do and how to accomplish processes more efficiently.

According to Richard St-Pierre, president of C2 International, “The key value a cloud ERP unlocks is a simple way to operate. Especially for a smaller or midsize organisation, with all the legal, encryption and backup requirements. It’s simply overwhelming for a small shop that doesn’t have a large IT team. It is becoming way too complex for a company to operate. So, a cloud-based solution like S/4HANA Cloud is really the only option. The alternative simply requires too many resources that we don’t have.”

With a standardized core, they can move at higher velocity and take advantage of new technologies that differentiate their services, rather than focusing energy on creating customized solutions to common industry processes.

Simplicity Powers Business Success

One of the most popular innovations within SAP S/4HANA Cloud is the consumer-grade usability. Executives are confident that they can speed adoption internally with the easy-to-use interface, and for businesses with a distributed workforce, the full mobile capabilities increase the value of the solution exponentially. Instead of having to search out tasks that need attention, users have actions pushed to them, much as they are used to with a Twitter or podcast notification.

More strategically, the simplicity of SAP S/4HANA Cloud means customers don’t focus on infrastructure. Instead, they focus on the business, with the confidence that they’ll be supported on a regular basis — and receive quarterly updates that make the solution ever more valuable. Intelligent cloud ERP grows with you, whether growth is in scale or business function, without a team maintaining infrastructure and implementing potentially disruptive updates.

Intelligent Cloud ERP is Here Now

Competitors are already adopting a solution that provides greater speed, simplicity, and standardized best practices, and receiving regular infusions of the latest in machine learning and analytics capabilities. Don’t fall behind with outdated technology or an outmoded approach. Instead, learn how you can create value on the new intelligent cloud ERP.

Source: All the above opinions are a personal perspective based on information provided by SAP on the SAP S/4HANA Cloud platform

http://news.sap.com/maximize-the-value-of-cloud-erp-with-sap-s4hana-cloud/

How To Survive as a QA in a Software Development Team

August 4th, 2017 by blogadmin

It is not always easy to be a software tester in a software development team. Developers will often consider software quality assurance (QA) people as inferior and wonder how they could question the perfection of the beautiful code they have just written. This article discusses some of the physical and psychological issues that software testers face and proposes solutions to avoid them.

What is usually considered in the first place when it comes to a discussion of a successfully finished software development project? Developers’ efforts, the stack of technologies used, pros and cons of the chosen development methodology, etc. But the issues which the software quality assurance (QA) team members face from day to day are usually barely mentioned. In this article we will try to shed some light on this question and consider possible solutions to the most significant problems.

The list of day-to-day disagreements that can crush the will of a QA specialist can be divided into two groups: physical and psychological aspects. Physical aspects are caused by the peculiarities of the industry and describe trends characteristic of most companies. Psychological aspects mostly belong to the sphere of interpersonal communication. You may or may not face them during your professional activities; it depends on the experience of your colleagues, the features of project management, and other aspects particular to a given company.

Physical Aspects: QA to Developers Ratio and Gender Imbalance

This section contains the issues associated with the software development industry itself. Even if you don’t have any experience as a member of a software testing company, you can predict some of the issues by analyzing the available statistical data. For example, let’s take a look at the ratio of testers to developers. Choosing the proper number of QA team members depends on many aspects. Not all companies follow Microsoft’s golden rule: “Keep the developer-to-tester ratio equal to 1:1”, and the most common ratio is 1 tester to 3 developers. It could even be 1 tester to 10 developers in some cases. The point is that in the most optimistic scenario, a QA specialist will have to handle the code written by three different developers. As a minimum. If the workflow was not adequately planned, you could find yourself overwhelmed with tasks very quickly. That can lead to a decline in productivity, stress, and frustration. Some software development approaches, like Scrum for example, imply regular meetings that help discuss what has been done, what needs to be done, and what problems the team has. This Agile context is a good chance for a software tester to attract attention to the great many tasks he or she has. But in the case of larger V-model or Waterfall-based projects that don’t include regular meetings by design, there should be a mechanism for communication between teams. The project manager has to ensure that there are no unspoken opinions and that QA team members are free to discuss the problems they face as well as ideas to solve them.

The next issue is related to the gender imbalance in the IT industry. According to the statistics, developers are mostly men while the world of QA is represented mainly by women. This situation can lead to different issues. The most obvious consequence is that the relationships between these teams can go far beyond professional etiquette. The problem can take different forms, the most innocuous of which is the common difficulty of communication between groups of men and women. Inappropriate behavior and flirting are fraught with more serious consequences and can influence the psychological climate inside the company. A strict company policy regarding inappropriate behavior in the workplace should be brought to the attention of all employees.

Psychological Aspects: “It Works on my PC” and why Team Building is Important

From time to time every QA team member faces the situation where a developer or a manager disagrees that a detected bug is a bug, despite all the evidence. The arguments may vary. The most common situation usually looks like this: you come to the developer and describe the bug that you have just detected. But instead of running to his workstation and fixing everything, he makes a helpless gesture and tells you: “But everything works fine on my PC!” Attempts to bring him around to your point of view may spoil relationships within the team and complicate further work on the project. It may look like a complex psychological issue, but it may have a relatively simple technical solution. Make sure that the QA and development teams run the same environment. This approach can help to avoid the above problem.

Despite the fact that the peculiarities of the profession are associated with solving technical issues, non-technical problems can be pretty difficult to overcome. Since you work with other people, you should always remember that the impact of human factors such as subjective judgment can’t be underestimated. For example, what should you do when the project manager is so intent on finishing the project as soon as possible that he insists a detected bug is not a bug at all and there is no reason to spend extra time and effort on solving it? Strict specifications and knowledge of the products of large companies with similar functionality are magic pills that can prevent the possibility of discrepancies. Indeed, if your company has clearly defined what is a bug and what is a feature, then you don’t even have to convince anyone of your conscientiousness and can be sure that the work is not done in vain. When you know exactly what your potential competitors have to offer their customers, and there is a bug that can break the functionality of your product and deprive you of a competitive advantage, ignoring it can negate all efforts. Any manager in such a situation will treat the opinion of the testing team with due attention.

The burden of responsibility for the quality of the released product is usually on the QA team. Unfortunately, in most cases, the ideas for improving the workflow proposed by testers are ignored. You can imagine what psychological pressure the described situation can cause. You must release the product in a limited time, you have no influence on how everything is done, and at the same time you are considered responsible for its quality. It is an unpleasant situation indeed. The power to resist such challenges usually comes along with experience. If you can’t change the situation, you have to adapt to it. Even if you have to work to a tight deadline, don’t rush. Spend as much time as you need to prioritize code checking and the depth of code coverage correctly, and you will be able to avoid undesirable consequences.

In most cases, the team of developers keeps apart from the QA team. Developers stick together, share common interests, and keep their distance from a person who looks at their code as a pile of bugs. Taking into account the fact that the overall number of testers in the company is lower than the number of developers, a QA team member can sometimes feel like an outcast. It can lead to a situation where testers and developers perceive each other as members of different castes rather than parts of a common mechanism. Various forms of team building activities can correct the situation. This does not necessarily mean something costly. The key goal of team building is learning to solve problems together. In order to do this, you don’t have to climb a mountain with your colleagues. That sounds exciting, but it is not necessary. There are a lot of activities that can be held in your own office and will not take more than 20 minutes. Show some creativity and you will find dozens of ways of creating team spirit.

Conclusion

The climate within a software development company is a pretty sensitive theme. The IT industry is a heterogeneous environment, and there is no single solution that can fit all companies. Creating a good team spirit is a job that requires adaptation to conditions and flexibility. The desire to impose one’s will regardless of everything will do more harm than good. We hope that by combining the approaches presented in this article, you will be able to find your own unique way of building the software development dream team.

Source: All the above opinions are a personal perspective based on information provided by Software Testing Magazine

http://www.softwaretestingmagazine.com/knowledge/how-to-survive-as-a-qa-in-a-software-development-team/

SAP has moved beyond ERP

July 11th, 2017 by blogadmin
The majority of people who are aware of SAP think that it is just an ERP solution, but that is not true. Over the last few years SAP has transformed itself: it now provides cloud computing and digital solutions, and presently more than 50% of its revenue comes from non-ERP offerings.
HANA (High Performance Analytic Appliance) was launched by SAP in 2010 and has been a very successful product so far. The biggest benefit of this tool is realised by senior management, as it provides real-time business information which is essential for prompt decisions. As we all know, SAP is a costly solution due to very high licensing fees; the cloud version of HANA has addressed this issue as well by offering “pay per use” options.
Besides this, SAP has also acquired many cloud-centric solutions like Hybris, SuccessFactors, Ariba, Concur and Fieldglass. You will be surprised to know that SAP presently has more than 200 million users on the cloud.
The core ERP solution is now called S/4HANA. This solution can be implemented on premises or in the cloud, which has come as a big relief for MSME organisations that find it difficult to implement SAP due to the huge license cost. Companies are slowly migrating from traditional ERP to S/4HANA.
Though there are many ERP solutions in the Indian market, SAP’s market share is more than 50% and there is a big demand for SAP-skilled resources in India. One of the biggest sources of demand is ongoing “support and maintenance” activities and the rollout of GST; incorporating GST changes into every IT solution has created a big demand for ERP-skilled resources.
SAP Digital Boardroom is primarily used to simplify and provide performance reporting of all business areas in real time, and it has fully automated business intelligence capabilities.
SAP is also investing a lot of money in machine learning, Blockchain innovations and IoT. I am sure something new will emerge very soon.
SAP is the market leader in ERP in India and has huge potential for job creation. If you are part of the business world and want to pursue a career in SAP consultancy, then now is the right time to get yourself trained in SAP.

Source: Blog by RK Bajpai

http://bajpairk1.blogspot.in/2017/07/sap-has-moved-beyond-erp.html

Big Data Learning Path for all Engineers and Data Scientists out there

July 7th, 2017 by blogadmin

Introduction

The field of big data is quite vast, and it can be a very daunting task for anyone who starts learning big data and its related technologies. The big data technologies are numerous, and it can be overwhelming to decide where to begin.

This is the reason I thought of writing this article. This article provides a guided path to start your journey of learning big data and will help you land a job in the big data industry. The biggest challenge we face is identifying the right role as per our interests and skill set.

To tackle this problem, I have explained each big data role in detail, while also considering the different job roles of engineers and computer science graduates.

I have tried to answer all the questions which you have or will encounter while learning big data. To help you choose a path according to your interest, I have added a tree map which will help you identify the right path.

Table of Contents

  1. How to get started?
  2. What roles are up for grabs in the big data industry?
  3. What is your profile, and where do you fit in?
  4. Mapping roles to Big Data profiles
  5. How to be a big data engineer?
  • What is the big data jargon?
  • Systems and architecture you need to know
  • Learn to design solutions and technologies
  6. Big Data Learning Path
  7. Resources

  1. How to get started?

One of the very first questions that people ask me when they want to start studying Big data is, “Do I learn Hadoop, Distributed computing, Kafka, NoSQL or Spark?”

Well, I always have one answer: “It depends on what you actually want to do”.

So, let’s approach this problem in a methodical way. We are going to go through this learning path step by step.

  2. What roles are up for grabs in the big data industry?

There are many roles in the big data industry. But broadly speaking they can be classified in two categories:

  • Big Data Engineering
  • Big Data Analytics

These fields are interdependent but distinct.

Big data engineering revolves around the design, deployment, acquisition and maintenance (storage) of large amounts of data. The systems which big data engineers are required to design and deploy make relevant data available to various consumer-facing and internal applications.

Big Data Analytics, on the other hand, revolves around utilizing the large amounts of data from the systems designed by big data engineers. Big Data Analytics involves analyzing trends and patterns and developing various classification, prediction and forecasting systems.

Thus, in brief, big data analytics involves advanced computations on the data, whereas big data engineering involves the design and deployment of systems and setups on top of which the computation is to be performed.

3. What is your profile, and where do you fit in?

Now that we know what categories of roles are available in the industry, let us try to identify which profile is suitable for you, so that you can analyze where you may fit in the industry.

Broadly, based on your educational background and industry experience we can categorize each person as follows:

  • Educational Background

(This includes interests and doesn’t necessarily point towards your college education).

  1. Computer Science
  2. Mathematics
  • Industry Experience
  1. Fresher
  2. Data Scientist
  3. Computer Engineer (work in Data related projects)

 Thus, by using the above categories you can define your profile as follows:

Eg 1: “I am a computer science grad with no experience but fairly solid math skills”.

You have an interest in Computer science or Mathematics but with no prior experience you will be considered a Fresher.

Eg 2: “I am a computer science grad working as a database developer”.

Your interest is in computer science and you are fit for a role of a Computer Engineer (data related projects).

Eg 3: “I am a  statistician working as a data scientist”.

You have an interest in Mathematics and fit for a role of a Data Scientist.

So, go ahead and define your profile.

(The profiles we define here are essential in finding your learning path in the big data industry).

  4. Mapping roles to profiles

Now that you have defined your profile, let’s go ahead and map it to the roles you should target.

 4.1 Big Data Engineering roles

If you have good programming skills and understand how computers interact over the internet (the basics), but you have no interest in mathematics and statistics, you should go for Big Data Engineering roles.

4.2 Big data Analytics roles

If you are good at programming and your education and interest lie in mathematics and statistics, you should go for Big Data Analytics roles.

  5. How to be a big data engineer?

Let us first define what a big data engineer needs to know and learn to be considered for a position in the industry. The first and foremost step is to identify your needs; you can’t just start studying big data without doing that. Otherwise, you would just be shooting in the dark.

In order to define your needs, you must know the common big data jargon. So let’s find out what big data actually means.

5.1 The Big Data jargon

A big data project has two main aspects: the data requirements and the processing requirements.

  • 5.1.1 Data Requirements jargon

Structure: As you are aware, data can be stored either in tables or in files. If data is stored according to a predefined data model (i.e. it has a schema), it is called structured data; if it is stored in files and does not have a predefined model, it is called unstructured data. (Types: Structured / Unstructured)

Size:  With size we assess the amount of data. (Types: S/M/L/XL/XXL/Streaming)

Sink Throughput: Defines at what rate data can be accepted into the system. (Types: H/M/L)

Source Throughput: Defines at what rate data can be updated and transformed into the system. (Types: H/M/L)

  • 5.1.2 Processing Requirements jargon

Query time: The time that a system takes to execute queries. (Types: Long/ Medium /Short)

Processing time: Time required to process data (Types: Long/Medium/Short)

Precision: The accuracy of data processing (Types: Exact/ Approximate)

5.2 Systems and architecture you need to know

Scenario 1: Design a system for analyzing the sales performance of a company by creating a data lake from multiple data sources like customer data, leads data, call center data, sales data, product data, weblogs, etc.

5.3 Learn to design solutions and technologies

Solution for Scenario 1: Data Lake for sales data

(This is my personal solution; you may come up with a more elegant one, and if you do, please share it below.)

So, how does a data engineer go about solving the problem?

A point to remember is that a big data system must not only be designed to seamlessly integrate data from various sources and make it available all the time, but must also be designed so that analysis of the data, and its utilization for developing applications (an intelligent dashboard in this case), is easy, fast and always available.

Defining the end goal:

  1. Create a Data Lake by integrating data from multiple sources.
  2. Automated updates of the data at regular intervals of time (probably weekly in this case)
  3. Data availability for analysis (round the clock, perhaps even daily)
  4. Architecture for easy access and seamless deployment of an analytics dashboard.

Now that we know what our end goals are, let us try to formulate our requirements in more formal terms.

  • 5.3.1 Data related Requirements

Structure: Most of the data is structured and has a defined data model, but data sources like weblogs, customer interactions / call center data, image data from the sales catalog and product advertising data are unstructured. The availability and requirement of image and multimedia advertising data may vary from company to company.

Conclusion: Both Structured and unstructured data

Size: L or XL (choice: Hadoop)

Sink throughput: High

Quality: Medium (Hadoop & Kafka)

Completeness: Incomplete

  • 5.3.2 Processing-related Requirements

Query Time: Medium to Long

Processing Time: Medium to Short

Precision: Exact

As multiple data sources are being integrated, it is important to note that different data will enter the system at different rates. For example, the weblogs will be available in a continuous stream with a high level of granularity.

Based on the above analysis of our requirements for the system we can recommend the following big data setup.

 

[Image: recommended big data system setup for the sales data lake]
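To make the setup above concrete, here is a minimal, hedged sketch of the ingestion layer: weblog events are consumed from a Kafka topic and landed as batch files in the raw zone of the data lake on HDFS. The broker address, topic name, namenode URL and paths are all hypothetical, and the libraries used (kafka-python and the hdfs WebHDFS client) are just one possible choice.

```python
# Minimal ingestion sketch: Kafka topic -> raw zone of the HDFS data lake.
# All hosts, topics and paths below are hypothetical placeholders.
from datetime import datetime

from kafka import KafkaConsumer   # pip install kafka-python
from hdfs import InsecureClient   # pip install hdfs (WebHDFS client)

BATCH_SIZE = 1000  # number of weblog events landed per file

consumer = KafkaConsumer(
    "weblogs",
    bootstrap_servers="kafka-broker:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: raw.decode("utf-8"),
)
hdfs_client = InsecureClient("http://namenode:9870", user="etl")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # Each batch becomes a new timestamped file in the raw zone.
        path = "/datalake/raw/weblogs/weblogs_{}.log".format(
            datetime.utcnow().strftime("%Y%m%d_%H%M%S")
        )
        hdfs_client.write(path, data="\n".join(batch), encoding="utf-8")
        batch = []
```

Downstream, scheduled batch jobs (for example in Hive or Spark) can curate these raw files into the tables that feed the analytics dashboard.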

  6. Big Data Learning Path

Now that you have an understanding of the big data industry, the different roles, and what is required of a big data practitioner, let’s look at the path you should follow to become a big data engineer.

As we know, the big data domain is littered with technologies, so it is quite crucial that you learn technologies that are relevant and aligned with your big data job role. This is a bit different from more conventional domains like data science and machine learning, where you start at one point and endeavor to cover everything in the field.

Below you will find a tree which you should traverse in order to find your own path. Even though some of the technologies in the tree are marked as a data scientist’s forte, it is always good to know all the technologies down to the leaf nodes of the path you embark on. The tree is derived from the lambda architecture paradigm.

[Image: big data learning path tree, derived from the lambda architecture]

With the help of this tree map, you can select the path as per your interest and goals. And then you can start your journey to learn big data.

One of the essential skills that any engineer who wants to deploy applications must have is Bash scripting. You must be very comfortable with Linux and Bash scripting; this is an essential requirement for working with big data.

At the core, most big data technologies are written in Java or Scala. But don’t worry: if you do not want to code in these languages, you can choose Python or R, because most big data technologies now support Python and R extensively.

Thus, you can start with any of the above-mentioned languages. I would recommend choosing either Python or Java.

Next, you need to be familiar with working on the cloud, because nobody is going to take you seriously if you haven’t worked with big data on the cloud. Try practicing with small datasets on AWS, SoftLayer or any other cloud provider; most of them have a free tier so that students can practice. You can skip this step for the time being if you like, but be sure to work on the cloud before you go for any interview.

Next, you need to learn about distributed file systems. The most popular one is the Hadoop Distributed File System (HDFS). At this stage you can also study a NoSQL database you find relevant to your domain. The diagram below helps you select a NoSQL database to learn based on the domain you are interested in.

The path up to this point covers the mandatory basics that every big data engineer must know.

Now is the point where you decide whether you would like to work with data streams or with dormant, large volumes of data. This is the choice between two of the four V’s that are used to define big data (Volume, Velocity, Variety and Veracity).

So let’s say you have decided to work with data streams to develop real-time or near-real-time analysis systems; then you should take the Kafka path. Otherwise, you take the MapReduce path, and thus you follow the path that you create. Do note that in the MapReduce path you do not need to learn both Pig and Hive; studying only one of them is sufficient.
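For a feel of the MapReduce path, here is a minimal PySpark sketch of the classic word count, expressed as map and reduce steps; the input and output paths are hypothetical.

```python
# Minimal MapReduce-style word count in PySpark; paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

lines = spark.sparkContext.textFile("hdfs:///datalake/raw/weblogs/")
counts = (
    lines.flatMap(lambda line: line.split())   # map: one record per word
         .map(lambda word: (word, 1))          # map: (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per word
)
counts.saveAsTextFile("hdfs:///datalake/derived/wordcounts")
spark.stop()
```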

In summary: The way to traverse the tree.

  1. Start at the root node and perform a depth-first traversal.
  2. Stop at each node and check out the resources given in the link.
  3. If you have decent knowledge and are reasonably confident at working with the technology then move to the next node.
  4. At every node try to complete at least 3 programming problems.
  5. Move on to the next node.
  6. Reach the leaf node.
  7. Start with the alternative path.

Did the last step (#7) baffle you? Well, truth be told, no application has only stream processing or only slow-velocity, delayed processing of data. Thus, you technically need to master executing the complete lambda architecture.

Also, note that this is not the only way you can learn big data technologies. You can create your own path as you go along. But this is a path which can be used by anybody.

If you want to enter the big data analytics world, you could follow the same path, but don’t try to perfect everything.

For a data scientist capable of working with big data, you need to add a couple of machine learning pipelines to the tree below and concentrate on those pipelines more than on the tree itself. But we can discuss ML pipelines later.

Add a NoSQL database of your choice to the above tree, based on the type of data you are working with.

[Image: NoSQL database options grouped by data type]

As you can see there are loads of NoSQL databases to choose from. So it always depends on the type of data that you would be working with.

To arrive at a definitive answer on which type of NoSQL database you need, you must take into account your system requirements such as latency, availability, resilience, accuracy and, of course, the type of data you are dealing with.
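As an illustration only, a document store such as MongoDB suits semi-structured records whose shape varies from item to item; the sketch below uses pymongo with hypothetical database and collection names.

```python
# Minimal document-store sketch with pymongo; names and data are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
catalog = client["sales_lake"]["product_catalog"]

# Documents with varying shapes fit naturally into a document store.
catalog.insert_one({
    "sku": "SKU-1001",
    "name": "Espresso Machine",
    "attributes": {"colour": "black", "wattage": 1350},
    "media": ["img/espresso_front.jpg"],
})

# Query by a nested attribute.
for doc in catalog.find({"attributes.colour": "black"}):
    print(doc["sku"], doc["name"])
```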

Source: All the above opinions are a personal perspective based on information provided by Analytics Vidhya.

https://www.analyticsvidhya.com/blog/2017/03/big-data-learning-path-for-all-engineers-and-data-scientists-out-there/

 

 

Which Software Testing Career Path and Certification is Right for You?

June 28th, 2017 by blogadmin No comments »

Are you wondering which ISTQB certification is right for you? The following is a short explanation of the various certifications and the way you might want to proceed based on your career goals. The tricky thing in our industry is the constant change. The skills of today may or may not be marketable tomorrow, so while you’re thinking about what you want to do, you should also consider what you need to do to be able to get or retain the job you want.

Foundation Level – This is where you need to start. This is the gateway certification for all the other ISTQB certifications. This level is designed for beginners all the way up to those who have been in the industry for a while (or maybe a long while) and need to brush up their skills and update their terminology. One of the fastest ways to fail in an interview is to use the wrong terms for testing processes, documents and techniques. Organizations tend to adopt their own terminology and it helps to have a base of “standard” terminology, particularly before you venture out for an interview.

The Foundation Level is in the process of being expanded to include several extension modules. Right now, the agile extension is due to be available in early 2014 and work is starting on the model-based testing extension. These are separate certifications you can get that are “added on” to your Foundation Level certification.

Advanced Level – This is where you need to start making decisions. What do you like to do? What do you want to do? Where are the most opportunities?

Advanced Level – Test Analyst – If you are not very technically minded, and would rather work with the user, know the business application and apply your skills more for analysis than programming, you want to pursue the Advanced Test Analyst certification. This certification is designed for the strong tester who has a deep understanding of the business domain and the needs of the user. You’ll learn about designing good test documentation, conducting effective reviews, and participating in risk analysis sessions (particularly to help determine the impact of a realized risk to the business user). You’ll learn about how you can contribute the test information (input data, action, expected results) to the test automation effort and you’ll learn about usability testing. You’ll also build upon the test techniques you learned at the Foundation Level and will learn new techniques such as domain analysis and cause-effect graphing, as well as how to test using use cases and user stories. You’ll learn more about defect-based and experience-based techniques so you’ll know how to pick an appropriate defect taxonomy and how to implement traceable and reproducible exploratory and checklist-based testing. Let’s not forget process improvement as well. You’ll learn what to track in your defect management to be sure you have the information to figure out what could be improved in your process and how you can do it. This certification is designed for the person who wants to spend their time testing, not programming or delving into the code or troubleshooting technical issues.
The path for the Advanced Test Analyst at the Expert Level will include a further specialization in usability testing and further development of testing techniques. At this point, these new syllabi are being discussed but will not be available until at least 2015.

Advanced Level – Technical Test Analyst – OK, admit it, you really like to play in the code. You like to review it, program tests to test it and create test automation and tools. If this is describing you, you definitely need to be looking at the Advanced Technical Test Analyst certification. This certification is designed for the technically minded individual who wants to and is capable of programming, both in scripting languages (e.g., python) as well as standard programming languages (e.g., java). You’ll learn how to approach white-box testing to find the difficult problems that are often missed by the black-box testing that is usually done by the Test Analyst. You will learn strong testing techniques that will allow you to systematically test decision logic, APIs and code paths. You will also learn about static and dynamic analysis techniques and tools (stamp out those memory leaks!). You will learn about testing for the technical quality characteristics such as efficiency (performance), security, reliability, maintainability, and portability. You’ll learn how to do effective code and architectural reviews. And, you’ll learn about tools – using tools, making tools, and a little about selecting the right tools. After all, you wouldn’t want to accidentally get a tool that creates code mutants (really, that’s a legitimate tool usage) when you really wanted a simulator. And did I mention automation? You will learn the basis for automation that will be built on at the Expert Level.

The Advanced Technical Test Analyst certification is the gateway to the Expert Level for Test Automation (Engineering) and Security. The Test Automation (Engineering) syllabus and the Security syllabus and their associated certifications are likely to be available in 2014 or early 2015.

Advanced Test Manager – Those who can, do, and those who can’t, manage? Well, that’s not usually a successful formula for a test manager. If you are a test manager or want to be one, and you are willing to learn all the necessary techniques and practices to be successful, then this certification is the one for you. You will learn all about test planning, monitoring and controlling for projects but you will also learn about establishing test strategies and policies that can change the course of testing for the organization. You will learn about how to effectively manage, both people and projects, and will learn the importance and application of metrics and estimation techniques. You will learn your role in reviews. You will learn how to effectively manage defects and how to focus on improving the test process. You will also learn the importance and proper usage of tools and be able to set realistic expectations regarding tool usage. So, if you like telling people what to do, and they tend to listen to you, this is probably the right certification for you. However, that said, remember that technical people respect technical people, so rather than just getting the Advanced Test Manager certification, you should think about also getting at least the Advanced Test Analyst certification as well.
The Advanced Test Manager certification is the gateway to the Expert Levels for Improving the Test Process and Test Management. The Expert Level Improving the Test Process certification focuses on various techniques and models that are used for test process improvement. This provides a good coverage of the most popular models and provides information regarding how to approach an improvement effort to net an effective result. The Expert Level Test Management certification focuses on honing those strategic, operational and personnel skills to make you the best test manager you can be. There is significant discussion in the syllabus about how to be sure your department is performing well and is receiving the accolades it deserves. There is also realistic information regarding managing people effectively and dealing with difficult situations.

The Advanced Test Manager certification is also a pre-requisite for the management part of the Expert Level Test Automation certification. This focuses on how to effectively manage an automation project, including getting the right tools, resources, budget and timeframe. This syllabus should be available in late 2014 or early 2015.

Which Way to Go?
It’s entirely up to you. As you can see, there are several ways you can go with the certification path. And remember, for example, you might not want to get the Advanced Technical Test Analyst certification if you are a test manager, but you can always read the free syllabus and learn something even without a big time investment. They make for interesting reading, even if you are not planning the particular career path that is indicated. Our industry is constantly changing and new syllabi are always in the works. If you plan to head for the Expert Level, it’s a good idea to start planning your path now as that may determine which Advanced certification(s) you will need. Keep an eye on the ISTQB web site for new additions to the syllabus family. And remember to train, not just for your current job, but for the next job you want to get. Right now, the job market is hot for those with the skills of the Advanced Technical Test Analyst. There is always a need for good test managers. Note the emphasis on the word “good”. And, many companies want Advanced Test Analysts as well because of the need for black-box testing and strong domain knowledge. Right now, the biggest growth is in the Advanced Technical Test Analyst area, but that can change quickly. Get your training now, so you’ll be ready.

It’s unlikely that we will run out of work anytime in the future because, as long as there are developers, there will be a need for testers. It’s built in job security! Plan and train for your future. It’s looking bright!

Source: All the above opinions are a personal perspective based on information provided by CSTB.

http://cstb.ca/which-software-testing-career-path-and-certification-is-right-for-you

Deep Learning vs. Machine Learning-The essential differences you need to know!

June 8th, 2017 by blogadmin No comments »


Machine learning and deep learning are all the rage! All of a sudden everyone is talking about them – irrespective of whether they understand the differences or not! Whether you have been actively following data science or not, you would have heard these terms.

Just to show you the kind of attention they are getting, here is the Google trend for these keywords:

[Image: Google Trends interest over time for “machine learning” and “deep learning”]

If you have often wondered what the difference between machine learning and deep learning is, read on for a detailed comparison in simple, layman’s terms. We have explained each of these terms in detail.

 Table of Contents

  1. What is Machine Learning and Deep Learning?
    1. What is Machine Learning?
    2. What is Deep Learning?
  2. Comparison of Machine Learning and Deep Learning
    1. Data Dependencies
    2. Hardware Dependency
    3. Problem Solving Approach
    4. Feature Engineering
    5. Execution time
    6. Interpretability
  3. Where is Machine Learning and Deep Learning being applied right now?
  4. Pop Quiz
  5. Future Trends
  1. What is Machine Learning and Deep Learning?

Let us start with the basics – What is Machine Learning and What is Deep Learning. If you already know this, feel free to move to section 2.

 1.1 What is Machine Learning?

The widely-quoted definition of Machine learning by Tom Mitchell best explains machine learning in a nutshell. Here’s what it says:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E ”

Did that sound puzzling or confusing? Let’s break this down with simple examples.

Example 1 – Machine Learning – Predicting weights based on height

Let us say you want to create a system which tells you the expected weight of a person based on their height. There could be several reasons why something like this could be of interest; for example, you could use it to filter out possible frauds or data capturing errors. The first thing you do is collect data. Let us say this is what your data looks like:

[Image: sample data – weight plotted against height]

Each point on the graph represents one data point. To start with, we can draw a simple line to predict weight based on height. For example, a simple line:

Weight (in kg) = Height (in cm) – 100

can help us make predictions. While the line does a decent job, we need to understand its performance. In this case, we can say that we want to reduce the difference between the predictions and the actual values; that is our way of measuring performance.

Further, the more data points we collect (Experience), the better our model will become. We can also improve our model by adding more variables (e.g. Gender) and creating different prediction lines for them.
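A minimal sketch of this example in Python, using made-up sample data: first the fixed rule Weight = Height – 100, then a line fitted from the data, with mean absolute error as the performance measure P.

```python
# Minimal sketch of the height-to-weight example; the sample data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

heights = np.array([150, 160, 170, 180, 190])   # cm
weights = np.array([52, 58, 69, 80, 91])         # kg (actual values)

# The simple rule from the text: Weight (kg) = Height (cm) - 100
rule_predictions = heights - 100
print("Rule MAE:", np.mean(np.abs(rule_predictions - weights)))      # performance P

# "Learning": fit the line from the data instead of fixing it by hand.
model = LinearRegression().fit(heights.reshape(-1, 1), weights)
fitted_predictions = model.predict(heights.reshape(-1, 1))
print("Fitted MAE:", np.mean(np.abs(fitted_predictions - weights)))  # improves with E
```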

Example 2 – Storm prediction System

Let us take a slightly more complex example. Suppose you are building a storm prediction system. You are given data on all the storms that have occurred in the past, along with the weather conditions during the three months before the occurrence of these storms.

Consider this: if we were to manually build a storm prediction system, what would we have to do?


We have to first scour through all the data and find patterns in it. Our task is to find out which conditions lead to a storm.

We can either model conditions manually – for example, if the temperature is greater than 40 degrees Celsius and the humidity is in the range 80 to 100, etc. – and feed these ‘features’ to our system.

Or else, we can make our system understand from the data what will be the appropriate values for these features.

Now to find these values, you would go through all the previous data and try to predict if there will be a storm or not. Based on the values of the features set by our system, we evaluate how the system performs, viz how many times the system correctly predicts the occurrence of a storm. We can further iterate the above step multiple times, giving performance as feedback to our system.

Let’s take our formal definition and try to define our storm prediction system: Our task ‘T’ here is to find what are the atmospheric conditions that would set off a storm. Performance ‘P’ would be, of all the conditions provided to the system, how many times will it correctly predict a storm. And experience ‘E’ would be the reiterations of our system.
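To tie T, P and E to code, here is a minimal sketch with made-up data: temperature and humidity are the hand-picked features, a logistic regression learns from past observations (E), and accuracy on the task of predicting storms (T) serves as the performance measure (P).

```python
# Minimal storm-prediction sketch; the data below is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Features: [temperature in deg C, humidity in %]; label: 1 = storm, 0 = no storm.
X = np.array([[42, 90], [38, 85], [25, 40], [30, 55], [45, 95], [22, 35]])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)        # experience E: learning from past data
predictions = model.predict(X)                # task T: predicting storms
print("P (accuracy):", accuracy_score(y, predictions))   # performance measure P
```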

 1.2 What is Deep Learning?

The concept of deep learning is not new; it has been around for quite a few years now. But with all the recent hype, deep learning is getting more attention. As we did with Machine Learning, we will look at a formal definition of Deep Learning and then break it down with an example.

“Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones.”

Now – that one might be confusing. Let us break it down with a simple example.

Example 1 – Shape detection

Let me start with a simple example which explains how things happen at a conceptual level. Let us try and understand how we recognize a square from other shapes.

[Image: identifying a square among other shapes]

The first thing our eyes do is check whether there are 4 lines associated with a figure or not (a simple concept). If we find 4 lines, we further check whether they are connected, closed and perpendicular, and whether they are equal as well (a nested hierarchy of concepts).

So, we took a complex task (identifying a square) and broke it into simpler, less abstract tasks. Deep Learning essentially does this at a large scale.

Example 2 – Cat vs. Dog

Let’s take an example of an animal recognizer, where our system has to recognize whether the given image is of a cat or a dog.

[Image: cat vs. dog image classification]

If we solve this as a typical machine learning problem, we will define features such as whether the animal has whiskers, whether it has ears and, if so, whether they are pointed. In short, we define the facial features and let the system identify which features are more important in classifying a particular animal.

Now, deep learning takes this one step further. Deep learning automatically finds the features which are important for classification, whereas in Machine Learning we had to provide the features manually. Deep learning works as follows:

  • It first identifies the edges that are most relevant for telling a cat from a dog.
  • It then builds on these hierarchically to find which combinations of shapes and edges are present – for example, whether whiskers are present or whether ears are present.
  • After consecutive hierarchical identification of complex concepts, it then decides which of these features are responsible for finding the answer (a minimal sketch of such a network follows this list).
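As a rough illustration of that hierarchy, here is a minimal Keras sketch of a small convolutional network for the cat-vs-dog idea. The input size and layer widths are illustrative assumptions, not a tuned architecture, and training data is omitted.

```python
# Minimal convolutional-network sketch for cat vs. dog; sizes are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),  # edges
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # combinations of edges/shapes
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),            # higher-level concepts
    layers.Dense(1, activation="sigmoid"),          # final cat-vs-dog decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```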
  2. Comparison of Machine Learning and Deep Learning

Now that you have understood an overview of Machine Learning and Deep Learning, we will take a few important points and compare the two techniques.

 2.1 Data dependencies

The most important difference between deep learning and traditional machine learning is how performance changes as the scale of data increases. When the data is small, deep learning algorithms don’t perform that well, because they need a large amount of data to learn the underlying patterns well. Traditional machine learning algorithms, with their handcrafted rules, prevail in this scenario. The image below summarizes this fact.

[Image: performance of deep learning vs. traditional machine learning as the amount of data increases]

2.2 Hardware dependencies

Deep learning algorithms depend heavily on high-end machines, contrary to traditional machine learning algorithms, which can work on low-end machines. This is because deep learning algorithms require GPUs, which are an integral part of how they work: deep learning inherently involves a large number of matrix multiplication operations, and these operations can be efficiently optimized on a GPU because that is exactly what a GPU is built for.

2.3 Feature engineering

Feature engineering is the process of putting domain knowledge into the creation of feature extractors, in order to reduce the complexity of the data and make patterns more visible to learning algorithms. This process is difficult and expensive in terms of time and expertise.

In Machine learning, most of the applied features need to be identified by an expert and then hand-coded as per the domain and data type.

For example, features can be pixel values, shape, textures, position and orientation. The performance of most Machine Learning algorithms depends on how accurately the features are identified and extracted.
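For instance, a hand-coded feature could be as simple as a per-channel colour histogram; the short sketch below computes one from a placeholder image array (the image itself is random data, used only for illustration).

```python
# Minimal hand-crafted feature sketch: per-channel colour histogram of an image.
import numpy as np

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # placeholder image

def colour_histogram(img, bins=8):
    """Concatenate an 8-bin histogram for each of the R, G and B channels."""
    feats = [np.histogram(img[:, :, channel], bins=bins, range=(0, 256))[0]
             for channel in range(3)]
    return np.concatenate(feats).astype(float)

features = colour_histogram(image)   # a 24-dimensional hand-crafted feature vector
print(features.shape)
```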

Deep learning algorithms try to learn high-level features from data. This is a very distinctive part of Deep Learning and a major step ahead of traditional Machine Learning. Deep learning therefore reduces the effort of developing a new feature extractor for every problem: a convolutional neural network, for example, will learn low-level features such as edges and lines in its early layers, then parts of faces, and then a high-level representation of a face.

[Image: features learned at successive layers of a deep network, from edges to high-level concepts]

2.4 Problem Solving approach

When solving a problem using a traditional machine learning algorithm, it is generally recommended to break the problem down into different parts, solve them individually and combine the results. Deep learning, in contrast, advocates solving the problem end-to-end.

Let’s take an example to understand this.

Suppose you have a task of multiple object detection: the task is to identify what the object is and where it is present in the image.

[Image: multiple object detection example]

In a typical machine learning approach, you would divide the problem into two steps: object detection and object recognition. First, you would use a bounding-box detection algorithm like GrabCut to skim through the image and find all the possible objects. Then, for each of the detected regions, you would use an object recognition algorithm such as an SVM with HOG features to recognize the relevant objects.

On the contrary, in the deep learning approach you would do the process end-to-end. For example, with a YOLO network (which is a type of deep learning algorithm), you would pass in an image, and it would give out the location along with the name of the object.
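The recognition half of the traditional pipeline might look like the minimal sketch below: HOG features are extracted from candidate regions and fed to a linear SVM. The crops and labels here are random placeholders; in a real system they would come from the detection step.

```python
# Minimal sketch of the classic HOG + linear SVM recognition step.
# Crops and labels are random placeholders standing in for detected regions.
import numpy as np
from skimage.feature import hog      # pip install scikit-image
from sklearn.svm import LinearSVC

crops = np.random.rand(20, 64, 64)               # 20 grayscale 64x64 candidate crops
labels = np.random.randint(0, 2, size=20)        # 1 = relevant object, 0 = background

X = np.array([hog(crop, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
              for crop in crops])                # hand-crafted HOG feature vectors
classifier = LinearSVC().fit(X, labels)
print(classifier.predict(X[:5]))
```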

 2.5 Execution time

Usually, a deep learning algorithm takes a long time to train. This is because there are so many parameters in a deep learning algorithm that training them takes longer than usual; the state-of-the-art deep learning model ResNet takes about two weeks to train completely from scratch. Machine learning comparatively takes much less time to train, ranging from a few seconds to a few hours.

This in turn is completely reversed at test time. At test time, a deep learning algorithm takes much less time to run, whereas for k-nearest neighbors (a type of machine learning algorithm) test time increases as the size of the data increases. This is not applicable to all machine learning algorithms, though, as some of them have small test times too.

2.6 Interpretability

Last but not least, we have interpretability as a factor for comparing machine learning and deep learning. This factor is the main reason industry still thinks ten times before using deep learning.

Let’s take an example. Suppose we use deep learning to give automated scoring to essays. The performance it gives in scoring is quite excellent and is near human performance. But there is an issue: it does not reveal why it has given that score. Mathematically, you can find out which nodes of a deep neural network were activated, but we don’t know what those neurons were supposed to model and what these layers of neurons were doing collectively. So we fail to interpret the results.

On the other hand, machine learning algorithms like decision trees give us crisp rules explaining why they chose what they chose, so it is particularly easy to interpret the reasoning behind them. Therefore, algorithms like decision trees and linear/logistic regression are primarily used in industry where interpretability matters.
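A minimal sketch of what “crisp rules” means in practice: a shallow decision tree trained on the classic iris dataset, whose learned rules can be printed as plain text (this uses scikit-learn’s export_text helper).

```python
# Minimal interpretability sketch: print the rules of a shallow decision tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Unlike a neural network, the learned model can be read directly as if/else rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```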

3. Where is Machine Learning and Deep Learning being applied right now?

The wiki article gives an overview of all the domains where machine learning has been applied. These include:

  • Computer Vision: for applications like vehicle number plate identification and facial recognition.
  • Information Retrieval: for applications like search engines, both text search, and image search.
  • Marketing: for applications like automated email marketing, target identification
  • Medical Diagnosis: for applications like cancer identification, anomaly detection
  • Natural Language Processing: for applications like sentiment analysis, photo tagging
  • Online Advertising, etc

[Image: application areas of machine learning and machine intelligence]

The image given above aptly summarizes the application areas of machine learning, although it covers the broader topic of machine intelligence as a whole.

One prime example of a company using machine learning / deep learning is Google.

[Image: machine learning applied across Google products]

In the above image, you can see how Google is applying machine learning in its various products. Applications of Machine Learning/Deep Learning are endless, you just have to look at the right opportunity!

4. Pop Quiz

To assess if you really understood the difference, we will do a quiz. You can post your answers in this thread.

Please address the points below to answer it completely.

  • How would you solve the below problem using Machine learning?
  • How would you solve the below problem using Deep learning?
  • Conclusion: Which is a better approach?

Scenario 1:

You have to build a software component for a self-driving car. The system you build should take in the raw pixel data from cameras and predict the angle by which you should steer the car’s wheel.

Scenario 2:

Given a person’s credentials and background information, your system should assess whether a person should be eligible for a loan grant.

Scenario 3:

You have to create a system that can translate a message written in Russian to Hindi so that a Russian delegate can address the local masses.

5. Future Trends

The above article should have given you an overview of Machine Learning and Deep Learning and the differences between them. In this section, I’m sharing my views on how Machine Learning and Deep Learning will progress in the future.

  • First of all, seeing the increasing use of data science and machine learning in the industry, it will become increasingly important for every company that wants to survive to incorporate Machine Learning into its business. Also, each and every individual will be expected to know the basic terminology.
  • Deep learning is surprising us each and every day, and will continue to do so in the near future. This is because Deep Learning is proving to be one of the best-performing techniques yet discovered, delivering state-of-the-art results.
  • Research is continuous in Machine Learning and Deep Learning. But unlike in previous years, when research was limited to academia, research in Machine Learning and Deep Learning is now exploding in both industry and academia. And with more funds available than ever before, it is more likely to be a key theme in human development overall.

I personally follow these trends closely. I generally get the latest news from Machine Learning/Deep Learning newsletters, which keep me updated on recent happenings. Along with this, I follow arXiv papers, and their accompanying code, which are published every day.

End notes

In this article, we had a high-level overview and comparison of deep learning and machine learning techniques. I hope I could motivate you to learn further about machine learning and deep learning. Here are the learning paths for both: Learning path for machine learning and Learning path for deep learning.

Source: All the above opinions are a personal perspective based on information provided by Analytics Vidhya.

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/