Big Data, Small Target: The Smart Approach To Artificial Intelligence

January 18th, 2018 by blogadmin No comments »

Companies that have invested heavily in big data solutions want to know, before deciding to go all in, how to make smart, strategic investments that will distinguish them from the competition and deliver the best possible return. In the past, not all enterprise big data initiatives went as planned. These failures are rarely publicized, but the big data failure rate is unusually high.

According to Gartner, only 15% of businesses make it past the pilot stage of these projects. Our fear, as leaders of technology companies, is that with so much attention surrounding AI, decision makers will feel pressured to apply the technology, or risk falling behind, without first establishing clear business goals and understanding the differences between AI and machine learning and how each should be applied.

It’s easy to get caught up in the allure of artificial intelligence as well as its hype, including breakthroughs like deep learning, but those looking to make an outsized impact should instead focus on its more practical counterpart: good old-fashioned machine learning — or “cheap learning,” as my colleagues Ted Dunning and Ellen Friedman explain in their guide Practical Machine Learning: Innovations in Recommendation.

The distinction is simple: Cheap learning is about leveraging basic machine learning techniques on straightforward data sets en masse to generate a large number of small, incremental improvements. Deep learning, on the other hand, is a specific subset of machine learning: a collection of sophisticated, computationally intensive approaches that make it possible to base business decisions on highly complex data sets.

For tasks that involve analyzing raw data, such as images and voice recordings, deep learning is best. But when it comes to working on simplified, structured types of data, we’ve found cheap machine learning will do the trick. When you consider that the majority of data flowing through enterprises falls into this second category, it’s clear which tool makes the most sense.
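
To make the distinction concrete, here is a minimal sketch of what “cheap learning” can look like in practice: a plain logistic regression over small, structured (tabular) data. The data, feature meanings and library choice (scikit-learn) are illustrative assumptions, not something prescribed by the article.

```python
# A minimal "cheap learning" sketch: a plain logistic regression on small,
# structured (tabular) data. The features and labels below are synthetic
# placeholders used purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic structured features, e.g. order value, visits per month, tenure.
X = rng.normal(size=(1_000, 3))
# Synthetic label (e.g. churned / not churned), loosely driven by the features.
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()   # simple, cheap to train, easy to explain and maintain
model.fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A model like this trains in seconds and can be deployed and maintained cheaply, which is exactly the trade-off described above.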

As you chart a course forward, here’s what you should be doing today to set your company up for success tomorrow:

Capture More, Better Data

Artificial intelligence is fueled by data. Pick an approach, and you’ll find data at the center. Why? Because large volumes of complete data sets are needed to accurately recognize significant patterns of behavior with people, events or other characterizations, and that’s what AI is all about.

Having access to more data — especially a range of contributing or related data sources — is usually an advantage. This is why companies like Google (a leading investor in our company), Amazon, Facebook, Alibaba and Baidu are so powerful from an AI perspective. These companies have enormous data sets that they’ve been capturing for decades across a wide variety of patterns and trends. This data has fed into their algorithms for years, making them increasingly refined, accurate and targeted.

For most enterprise companies, the big challenge is that it’s not always clear at the time data is collected what’s going to matter down the road. This makes it hard to know what to measure today and whether that measurement will be valuable in the future. This line of thinking represents the old-school way — it presumes there is only a finite amount of data one can feasibly capture and store, but that’s no longer the case with the advent of new technologies. Furthermore, the ability to connect this data at a meta-schema level allows a completely new perspective on previously unrelated data sources. In addition, big data has seen its fair share of innovation in recent years, with storage becoming increasingly smarter and cheaper.

Establish Clear Business Objectives

Successful machine learning isn’t just about choosing the right tool or algorithm and feeding it tons of data. Context matters. Putting machine learning to work on large data sets will yield little value without clear objectives guiding the effort.

Do you know what success looks like today? How about five or 10 years from now? Machine learning can help you get a clear baseline today and empower data scientists and engineers to point it in the right direction based on data visibility that is continuously being reviewed and refined.

There’s a sense that AI techniques like machine learning will offer businesses a magic bullet that turns everything into a smarter, more efficient version of itself. This is wishful thinking. Today, these tools work best in narrow frameworks; in the long term this will not be true, but it’s today’s reality. The more specific the objective, the more effective the tool and the higher likelihood of success. Operationalizing a vast number of simple but powerful techniques can deliver enormous business value with relatively short development times and ease of deployment and maintenance.

Stay Grounded

The path to real business value is a well-crafted strategy. Once you have a business roadmap with goals and well-defined objectives, the application of AI techniques will make more sense and align with the overall business strategy. There is no worse feeling or decision more career-limiting than using advanced techniques and technologies that are not aligned to your business goals and strategy. These projects are, typically, the most strategic and have the greatest visibility and highest expectations.

Every business wants demonstrated improvements based on hard data to support the results. The bottom line: Use the appropriate technique for the assignment given. Truthfully (and based on our practical experience), deep learning will come in handy for some applications and may be the right strategic technology choice. But for most applications in the enterprise, cheap learning will offer a more practical — and effective — solution. Don’t be afraid to recognize the difference.

Source: All the above opinions are personal perspective on the basis of information provided by Forbes and contributor Tom Fisher

 

A Look At SAP’s 2017 So Far

January 9th, 2018 by blogadmin No comments »

Software giant SAP continued its strong performance in 2017, with its top line growing at more than 9% in the first three quarters of the year and beating market expectations. Aided by a phenomenal increase in new cloud bookings, revenues from its Cloud business remained the primary growth driver.

The company’s cloud and software gross margins saw a marginal decline, but a substantial improvement in its services gross margin led to a slight improvement in overall margins. We expect cloud margins to continue declining in the near term, as the company faces tough competition from software behemoths like Microsoft, Oracle and Salesforce. Operating profit and EPS grew by 19% and 35% year-over-year, respectively, due to lower share-based compensation, acquisition-related charges and restructuring costs.

Owing to a good performance in the first three quarters of the year, as well as overall stock market gains, SAP’s stock is currently trading 16% higher than its price in January. While the revenue growth was seen across all business segments, SAP’s Cloud business, aided by a strong increase in new bookings, was the standout performer.

The company continued its dominance in the Enterprise Resource Planning software market, with more than 1,500 customers adopting its S/4HANA platform this year, taking the overall count to over 6,900 customers. This should assuage some investor concerns about the long-term value of this platform, the sheer power of which is reflected in its cost.

Moreover, with 80% of its customers still using the earlier platform and expected to shift to the newer one in the near future, there is tremendous potential which the company expects to tap.

Cloud Offerings Continue Phenomenal Growth Under Increased Adoption

As more and more companies adopt cloud services, the overall cloud market size has been expanding at a rapid rate. Aided by a 30% increase in new cloud bookings, SAP’s revenue from Cloud Support and Services grew 28% in constant currency. The growth was evident across all geographies, with its revenues growing by 9% year over year in EMEA, 7% in the Americas and a robust 12% in Asia-Pacific.

SAP is also rapidly expanding its presence in the Internet of Things (IoT) space with new products and partnerships. This is a multi-billion dollar market, which could very well be responsible for driving the next phase of SAP’s Cloud revenue growth.

The recent addition of multiple Internet of Things (IoT) solutions to the SAP Leonardo digital innovation system highlights SAP’s renewed focus on bolstering its foothold in the IoT domain, which could drive the company’s top line in the future.

Combined with its ongoing efforts to strengthen its offerings in the machine learning space, SAP is likely to fare well going forward despite the heavy competition.

Source: All the above opinions are personal perspective on the basis of information provided by Forbes and contributor Trefis Team

 

Artificial Intelligence and The Future Of Jobs

December 28th, 2017 by blogadmin No comments »

The job of an IoT Evangelist at SAP is to go around and speak about how the Internet of Things is changing the way we live, work, and run our businesses. IoT Evangelist is a job title that didn’t exist 5 or 10 years ago – mainly because the Internet of Things wasn’t a “thing” 5 or 10 years ago. Today it is, and so is the job of IoT Evangelist.

The fact is, technological change has a tremendous impact on the way we spend our working lives. Many of today’s jobs didn’t exist in the past. Of course, the reverse is true as well: a lot of jobs – mostly tedious/manual labor of some variety, think miners, lift operators, or similar – have gone away.

Robots and much more

Much of the discussion today about the relationship between technology and jobs is a discussion about the impact of artificial intelligence (AI). Robots in manufacturing are the most obvious example. A lot of AI has to do with big data analysis and identifying patterns. Thus, AI is used in data security, financial trading, fraud detection, and those recommendations you get from Google, Netflix and Amazon.

But it’s also used in healthcare for everything from identifying better subjects for clinical trials to speeding drug discovery to creating personalized treatment plans. It’s used in autonomous vehicles as well – to adjust, say, to new local conditions on the road. Some say it’s also coming for professional jobs. Think about successfully appealing parking fines (currently home turf for lawyers), automated contract creation, or automated natural language processing (which someday could be used to write this blog itself – gulp!).

The spinning jenny

Will AI continue to take jobs away? Probably. But how many new jobs will it create? Think back to the spinning jenny – the multi-spindle spinning frame that, back in the mid-18th century, started to reduce the amount of work required to make cloth.

By the early 19th century, a movement known as the Luddites emerged where groups of weavers would go around smashing these machines as a form of protest against what we’d now call job displacement. But these machines helped launch the industrial revolution.

As a result of the spinning jenny’s increased efficiency, more people could buy more cloth – of higher quality, at a fraction of the cost. This led to a massive uptick in demand for yarn – which required the creation of distribution networks, and ultimately the need for shipping, an industry that took off in the industrial revolution.

As the spinning jenny came into use, it was continuously improved – eventually enabling a single operator to manage up to 50 spindles of yarn at a time. Other machines appeared on the scene as well. This greater productivity, and the evolution of distribution networks also meant there was a need for increasingly comprehensive supply chains to feed this productivity boom.

Muscle vs caring

Economists at Deloitte looked at this issue of technological job displacement – diving into UK census data for a 140-year period stretching from 1871 to 2011. What they found, not surprisingly perhaps, is that over the years technology has steadily taken over many of the jobs that require human muscle power.

Agriculture has felt the impact most acutely. With the introduction of seed drills, reapers, harvesters and tractors, the number of people employed as agricultural laborers has declined by 95% since 1871.

But agriculture is not alone. The jobs of washer women and laundry workers, for example, have gone away as well. Since 1901, the number of people in England and Wales employed for washing clothes has decreased 83% even though the population has increased by 73%.

Many of today’s jobs, on the other hand, have moved to what are known as the caring professions. In Deloitte’s analysis, muscle-powered jobs such as cleaners, domestics, miners, and laborers of all sorts have steadily declined, while caring professions such as nurses, teachers, and social workers have grown; over the period studied, the two categories have effectively flipped.

The Deloitte study also points out that as wealth has increased over the years, so have jobs in the professional services sector. According to the census records analyzed, in England and Wales accountants have increased from 9,832 in 1871 to 215,678 in 2015. That’s a 2,094% increase.

And because people have more money in general, they eat out more often – leading to a fourfold increase in pub staff. They can also afford to care more about how they look. This has led to an increase in the ratio of hairdressers/barbers to citizens from 1:1,793 in 1871 to 1:287 today. Similar trends can be seen in other industries such as leisure, entertainment, and sports.

Where are we headed now?

Will broader application of AI and other technologies continue the trend of generating new jobs in unexpected ways? Most assuredly. Already we’re seeing an increased need for jobs such as AI ethicists – another role that didn’t exist 5-10 years ago.

The fact of the matter is that technology in general, and AI in particular, will contribute enormously to a hugely changing labour landscape. As mentioned at the start of this post, the IoT Evangelist role at SAP will likely no longer exist in 5 years’ time, because by then everything will be connected, and so the term Internet of Things will be redundant, in the same way terms like “Internet connected phone” or “interactive website” are redundant today.

The rise of new technologies will create new jobs, not just for people working directly with the new technologies, but also in training, re-training, and educational content development to bring people up to speed.

Will there be enough of those jobs to go around – and will they pay enough to support a middle-class existence for those who hold them? That’s another question – but it’s one that’s stimulating a lot of creative, innovative ideas of its own as people think seriously about where technology is taking us.

Source: All the above opinions are personal perspective on the basis of information provided by Forbes and writer Tom Raftery

https://www.forbes.com/sites/sap/2017/12/14/artificial-intelligence-and-the-future-of-jobs/#777c8abf4923


Get SAP Certification from your home ONLINE and at 1/6 of the cost!

December 18th, 2017 by blogadmin No comments »

To my surprise, SAP has started delivering most of the certifications on SAP Cloud Hub, an SAP portal for certification and education, where you can register, pay and appear for the online proctor-monitored exam from the comfort of your own home!

The best part is that you get a package of 6 exams in one fee of CAD$ 720 or USD$535 (subject to change as per SAP policies).

You can use those attempts to appear up to 3 times for the same exam (just in case you fail) or for 6 separate SAP modules.

Compare this cost with SAP training center or Pearson VUE based certification, where you pay CAD$ 720 for each exam attempt, irrespective of whether you fail or pass.

Here is more information:

  1. Find your SAP certification: List of SAP Certifications (Cloud certifications are marked by cloud)
  2. Book Cloud Certification (Canada) or Book Cloud Certification (US)
  3. Appear for the exam

Here are more blogs on similar topics:

SAP Cloud Certification

A general blog, how to pass SAP Certification

Source: All the above opinions are personal perspective on the basis of information provided by Praveen Kumar

https://www.linkedin.com/pulse/get-sap-certification-from-your-home-online-16-cost-praveen/?published=t


A Complete Beginner’s Guide to Blockchain

December 12th, 2017 by blogadmin No comments »

You may have heard the term ‘blockchain’ and dismissed it as a fad, a buzzword, or even technical jargon. But blockchain is a technological advance that will have wide-reaching implications, transforming not just financial services but many other businesses and industries.

A blockchain is a distributed database, meaning that the storage devices for the database are not all connected to a common processor.  It maintains a growing list of ordered records, called blocks. Each block has a timestamp and a link to a previous block.
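
As a concrete illustration of that structure, here is a small, self-contained Python sketch. It is a toy, not a real blockchain client: an ordered list of blocks, each carrying a timestamp, some data and a hash link to the previous block, so that any retroactive edit becomes detectable.

```python
# A toy illustration (not a real blockchain client) of the structure described
# above: an ordered list of blocks, each with a timestamp, some data and a hash
# link to the previous block, so a retroactive edit is detectable.
import hashlib
import json
import time

def block_hash(block):
    payload = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def is_chain_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                                  # block contents were altered
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False                                  # link to the previous block is broken
    return True

chain = [make_block("genesis entry", previous_hash="0" * 64)]
chain.append(make_block("diagnosis recorded", previous_hash=chain[-1]["hash"]))
chain.append(make_block("treatment recorded", previous_hash=chain[-1]["hash"]))

print(is_chain_valid(chain))   # True
chain[1]["data"] = "tampered"  # a retroactive edit...
print(is_chain_valid(chain))   # False: the stored hash no longer matches the contents
```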

Cryptography ensures that users can only edit the parts of the blockchain that they “own” by possessing the private keys necessary to write to the file. It also ensures that everyone’s copy of the distributed blockchain is kept in sync.

Imagine a digital medical record: each entry is a block. It has a timestamp, the date and time when the record was created. And by design, that entry cannot be changed retroactively, because we want the record of diagnosis, treatment, etc. to be clear and unmodified. Only the doctor, who has one private key, and the patient, who has the other, can access the information, and then information is only shared when one of those users shares his or her private key with a third party — say, a hospital or specialist. This describes a blockchain for that medical database.

Blockchains are secure databases by design.  The concept was introduced in 2008 by Satoshi Nakamoto, and then implemented for the first time in 2009 as part of the digital bitcoin currency; the blockchain serves as the public ledger for all bitcoin transactions. By using a blockchain system, bitcoin was the first digital currency to solve the double spending problem (unlike physical coins or tokens, electronic files can be duplicated and spent twice) without the use of an authoritative body or central server.

The security is built into a blockchain system through the distributed timestamping server and peer-to-peer network, and the result is a database that is managed autonomously in a decentralized way.  This makes blockchains excellent for recording events — like medical records — transactions, identity management, and proving provenance. It is, essentially, offering the potential of mass disintermediation of trade and transaction processing.

Some people have called blockchain the “internet of value” which is a good metaphor.

On the internet, anyone can publish information and then others can access it anywhere in the world. A blockchain allows anyone to send value anywhere in the world where the blockchain file can be accessed. But you must have a private, cryptographically created key to access only the blocks you “own.”

By giving a private key which you own to someone else, you effectively transfer the value of whatever is stored in that section of the blockchain.

So, to use the bitcoin example, keys are used to access addresses, which contain units of currency that have financial value. This fills the role of recording the transfer, which is traditionally carried out by banks.

It also fills a second role, establishing trust and identity, because no one can edit a blockchain without having the corresponding keys. Edits not verified by those keys are rejected.  Of course, the keys — like a physical currency — could theoretically be stolen, but a few lines of computer code can generally be kept secure at very little expense.  (Unlike, say, the expense of storing a cache of gold in a proverbial Fort Knox.)

This means that the major functions carried out by banks — verifying identities to prevent fraud and then recording legitimate transactions — can be carried out by a blockchain more quickly and accurately.

Why is blockchain important?

We are all now used to sharing information through a decentralized online platform: the internet. But when it comes to transferring value – money – we are usually forced to fall back on old fashioned, centralized financial establishments like banks. Even online payment methods which have sprung into existence since the birth of the internet – PayPal being the most obvious example – generally require integration with a bank account or credit card to be useful.

Blockchain technology offers the intriguing possibility of eliminating this “middle man”. It does this by filling three important roles – recording transactions, establishing identity and establishing contracts – traditionally carried out by the financial services sector.

This has huge implications because, worldwide, the financial services market is the largest sector of industry by market capitalization. Replacing even a fraction of this with a blockchain system would result in a huge disruption of the financial services industry, but also a massive increase in efficiencies.

But it is the third role, establishing contracts, that extends its usefulness outside the financial services sector. Apart from a unit of value (like a bitcoin), blockchain can be used to store any kind of digital information, including computer code.

That snippet of code could be programmed to execute whenever certain parties enter their keys, thereby agreeing to a contract.  The same code could read from external data feeds — stock prices, weather reports, news headlines, or anything that can be parsed by a computer, really — to create contracts that are automatically filed when certain conditions are met.

These are known as “smart contracts,” and the possibilities for their use are practically endless.

For example, your smart thermostat might communicate energy usage to a smart grid; when a certain number of wattage hours has been reached, another blockchain automatically transfers value from your account to the electric company, effectively automating the meter reader and the billing process.
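
The conditional logic behind such an arrangement can be illustrated with a short sketch. This is plain Python rather than an actual on-chain contract language, and the names, threshold and price below are invented for illustration only.

```python
# A toy sketch of the conditional "smart contract" logic described above,
# written in plain Python rather than an on-chain language such as Solidity.
# All names, thresholds and prices are invented for illustration.
from dataclasses import dataclass

@dataclass
class MeteringContract:
    threshold_wh: float          # watt-hours of usage that trigger a settlement
    price_per_wh: float          # agreed price per watt-hour
    usage_wh: float = 0.0

    def record_usage(self, wh, pay):
        """Accumulate metered usage; when the agreed threshold is reached,
        execute the payment callback and reset the meter."""
        self.usage_wh += wh
        if self.usage_wh >= self.threshold_wh:
            pay(self.usage_wh * self.price_per_wh)
            self.usage_wh = 0.0

def pay_utility(amount):
    print(f"transferring {amount:.2f} to the electric company")

contract = MeteringContract(threshold_wh=1000, price_per_wh=0.0002)
for reading in [300, 450, 400]:          # readings sent by the smart thermostat
    contract.record_usage(reading, pay_utility)
```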

Or, let’s return to our medical records example; if a doctor or patient issues a private key to a medical device, say a blood glucose monitor, the device could automatically and securely record a patient’s blood glucose levels, and then, potentially, communicate with an insulin delivery device to maintain blood glucose at a healthy level.

Or, it might be put to use in the regulation of intellectual property, controlling how many times a user can access, share, or copy something. It could be used to create fraud-proof voting systems, censorship-resistant information distribution, and much more.

The point is that the potential uses for this technology are vast, and that more and more industries will find ways to put it to good use in the very near future.

Source: All the above opinions are personal perspective on the basis of information provided by Forbes and writer Bernard Marr

https://www.forbes.com/sites/bernardmarr/2017/01/24/a-complete-beginners-guide-to-blockchain/#34e92e336e60

How Safe are Blockchains? It Depends

December 6th, 2017 by blogadmin No comments »

Blockchain networks must also cope with offline or intermittently active nodes. Nodes may go offline for innocuous reasons, but the network must be structured to function (to obtain consensus on previously verified transactions and to correctly verify new transactions) without the offline nodes, and it must be able to quickly bring these nodes back up to speed if they return.

Consensus Protocols and Access Permissions in Public vs. Private Blockchains

The process used to get consensus (verifying transactions through problem solving) is purposely designed to take time, currently around 10 minutes. Transactions are not considered fully verified for about one to two hours, after which point they are sufficiently “deep” in the ledger that introducing a competing version of the ledger, known as a fork, would be computationally expensive. This delay is both a vulnerability of the system, in that a transaction that initially seems to be verified may later lose that status, and a significant obstacle to the use of bitcoin-based systems for fast-paced transactions, such as financial trading.

In a private blockchain, by contrast, operators can choose to permit only certain nodes to perform the verification process, and these trusted parties would be responsible for communicating newly verified transactions to the rest of the network. The responsibility for securing access to these nodes, and for determining when and for whom to expand the set of trusted parties, would be a security decision made by the blockchain system operator.

Transaction Reversibility and Asset Security in Public vs. Private Blockchains

While blockchain transactions can be used to store data, the primary motivation for bitcoin transactions is the exchange of bitcoin itself; the currency’s exchange rate has fluctuated over its short lifetime but has increased in value more than fivefold over the past two years. Each bitcoin transaction includes unique text strings that are associated with the bitcoins being exchanged. Similarly, other blockchain systems record the possession of assets or shares involved in a transaction. In the bitcoin system, ownership is demonstrated through the use of a private key (a long number generated by an algorithm designed to provide a random and unique output) that is linked to a payment, and despite the value of these keys, like any data, they can be stolen or lost, just like cash. These thefts are not a failure of the security of bitcoin, but of personal security; the thefts are the result of storing a private key insecurely. Some estimates put the value of lost bitcoins at $950 million.
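
For a sense of what such a key is, the sketch below simply draws 256 random bits from Python’s cryptographically secure random source; real bitcoin wallets additionally derive a public key and an address from this value, which is not shown here.

```python
# A private key is essentially a very large random number. This draws 256 bits
# from the operating system's cryptographically secure random source.
import secrets

private_key = secrets.token_hex(32)   # 32 random bytes = 256 bits, shown as 64 hex characters
print(private_key)
print("possible keys:", 2 ** 256)     # large enough that guessing a key is infeasible
```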

Private blockchain operators therefore must decide how to resolve the problem of lost identification credentials, particularly for systems that manage physical assets. Even if no one can prove ownership of a barrel of oil, the barrel will need to reside somewhere. Bitcoin currently provides no recourse for those who have lost their private keys; similarly, stolen bitcoins are nearly impossible to recover, as transactions submitted with stolen keys appear to a verifying node to be indistinguishable from legitimate transactions.

Private blockchain owners will have to make decisions about whether, and under what circumstances, to reverse a verified transaction, particularly if that transaction can be shown to be a theft. Transaction reversal can undermine confidence in the fairness and impartiality of the system, but a system that permits extensive losses as a result of the exploitation of bugs will lose users. This is illustrated by the recent case of the DAO (Decentralized Autonomous Organization), a code-based venture capital fund designed to run on Ethereum, a public blockchain-based platform. Security vulnerabilities in the code operating the DAO led to financial losses that required Ethereum’s developers to make changes to the Ethereum protocol itself, even though the DAO’s vulnerabilities were not the fault of the Ethereum protocol. The decision to make these changes was controversial, and underscores the idea that both public and private blockchain developers should consider circumstances under which they would face a similar decision.

Weighing the Rewards

The benefits offered by a private blockchain — faster transaction verification and network communication, the ability to fix errors and reverse transactions, and the ability to restrict access and reduce the likelihood of outsider attacks — come with a degree of central control that may cause prospective users to be wary of the system. The need for a blockchain system at all presupposes a degree of mistrust, or at least an acknowledgement that all users’ incentives may not be aligned. Developers who work to maintain public blockchain systems like bitcoin still rely on individual users to adopt any changes they propose, which serves to ensure that changes are only adopted if they are in the interest of the entire system. The operators of a private blockchain, on the other hand, may choose to unilaterally deploy changes with which some users disagree. To ensure both the security and the utility of a private blockchain system, operators must consider the recourse available to users who disagree with changes to the system’s rules or are slow to adopt the new rules. The number of operating systems currently running without the latest patch is a strong indication that even uncontroversial changes will not be adopted quickly.

While the risks of building a financial market or other infrastructure on a public blockchain may give a new entrant pause, private blockchains offer a degree of control over both participant behavior and the transaction verification process. The use of a blockchain-based system is a signal of the transparency and usability of that system, which are bolstered by the early consideration of the system’s security. Just as a business will decide which of its systems are better hosted on a more secure private intranet or on the internet, but will likely use both, systems requiring fast transactions, the possibility of transaction reversal, and central control over transaction verification will be better suited for private blockchains, while those that benefit from widespread participation, transparency, and third-party verification will flourish on a public blockchain.

Source: All the above opinions are personal perspective on the basis of information provided by Harvard Business Review and writer Allison Berke

https://hbr.org/2017/03/how-safe-are-blockchains-it-depends?referral=03759&cm_vc=rr_item_page.bottom

5 Essential Blockchain Predictions That Will Define 2018

November 29th, 2017 by blogadmin No comments »

The potential for blockchain technology to bring about widespread change has been predicted since 2011 and the emergence of Bitcoin. But it was this year when the concept really started to capture people’s attention.

Perhaps spurred on by the meteoric rise in price of Bitcoin – the first tangible example of a blockchain technology – hype grew around encrypted, distributed ledgers in the financial sector.


Blockchain-focused financial services startups raised $240 million in venture funding during the first half of the year. However, its potential was beginning to be recognized across other sectors and industries.

2018 is likely to see a continuation of this trend of innovation and disruption. Here are the five key ways this is likely to happen.

  1. More use outside of finance

While its implications for the financial sector might seem most apparent, any industry or organization in which recording and oversight of transactions is necessary could benefit. In healthcare, IDC Health Insights predicts that 20% of organizations will have moved beyond pilot projects and will have operationalized blockchain by 2020, so 2018 should see significant progress in that direction.

In recruitment and HR, blockchain CVs have been developed which will streamline the selection process by verifying candidates’ qualifications and relevant experience.

Legal work which involves tracking transfer of ownership – for example intellectual property law, or real estate deeds – will also be made more efficient through implementation of distributed ledgers. Next year we should expect to see inroads by innovators in the legal field making this a reality.

Meanwhile in manufacturing and industry, the Blockchain Research Institute, the founders of which include IBM, PepsiCo and FedEx, says it expects blockchain to become the “second generation” of the digital revolution following the development of the internet. It has highlighted work by electronics manufacturer Foxconn to use blockchain to track transactions in its supply chain.

  2. Blockchain meets the Internet of Things

Though this sounds like a clash of the buzzwords, serious thinking is going into how these technologies could be made to work together to improve business processes, and day-to-day life.

Security is one reason they are a good fit – blockchain’s encrypted and trustless nature makes it a viable option for keeping the ever-growing number of connected devices in our homes and offices safe. Researchers envisage that the kind of compute power used to “mine” Bitcoin could be put to use safeguarding our smart homes from a new generation of cyber-burglars looking to break in and steal our data.

Another proposed use is that the cryptocurrencies built on blockchains would prove ideal for automated micro-transactions made between machines. As well as recording machine activity on the ledger for record-keeping and analytical purposes, machines could effectively “pay” each other when smart machines operated by one organization interact and transact with those owned by others. This is likely to be further down the road, but it is likely we will see research and breakthroughs in this area in 2018.

  3. Smart contracts will come into their own

“Smart contracts” are another possibility brought about by blockchain – the idea is that contracts will execute automatically when conditions are fulfilled, meaning payments will be made, or deliveries dispatched, or anything else in business which is typically defined by a contract.

Blockchains make smart contracts possible because of their consensus-driven nature. Once agreed-on conditions are met, the contract is fulfilled. This could mean paying bonuses when targets are hit, or despatching an order once a payment has hit your account.

Insurers AIG are piloting a blockchain smart contract system to oversee the creation of complex insurance policies which require international cooperation, and we should expect more to follow in their footsteps next year.


  4. State-Sanctioned Crypto Currencies?

Putin was the first – with the recent announcement of the “Crypto Rouble” – but it was inevitable that politicians would at some point start to consider the advantages of blockchain-derived currencies. In the wake of Bitcoin, it has often seemed that nation states have been lacking in their enthusiasm for this particular application – and probably with good cause.

Bitcoin was after all envisaged as a way of creating a tradeable currency which couldn’t be manipulated by governments. Some, such as China, have been outright hostile – refusing to allow exchanges to operate within their borders and issuing warnings about the high risk of investing in cryptocurrencies. 2018, however, could be the year that governments finally get on board the blockchain bandwagon – as its potential for creating efficiencies in both financial and public services becomes more apparent.

Source: All the above opinions are personal perspective on the basis of information provided by Forbes and writer Bernard Marr

https://www.forbes.com/sites/bernardmarr/2017/11/22/5-essential-blockchain-predictions-that-will-define-2018/#1e43d0057c93


SAP Cloud Platform Helps HR Leaders Connect and Extend SAP SuccessFactors Solutions

September 8th, 2017 by blogadmin No comments »

With simple access and easy connectivity to what are now more than 100 applications that complement the SAP SuccessFactors HCM Suite, HR leaders can meet specific and immediate business needs more easily than ever before. This announcement was made at SuccessConnect, taking place August 29–31 at The Cosmopolitan of Las Vegas.

“SAP has a vibrant partner ecosystem building innovative app extensions to SAP SuccessFactors solutions on SAP Cloud Platform,” said SAP SuccessFactors* President Greg Tomb. “These extensions help customers solve business problems, differentiate and innovate, and accelerate their digital HR transformation. With the power of our strong ecosystem, we enable our customers with cutting-edge technology that helps them execute against strategy and put their people at the center of business.”

SAP and its partners continue to drive rapid innovation and increase the strategic value of human resources across the enterprise. Among the extensions that will be on display at SuccessConnect in Las Vegas are:

  • EnterpriseAlumni by EnterpriseJungle: Integration of corporate alumni into HR landscapes immediately transforms and expands talent supply, including boomerang hires. Alumni management helps drive recruitment, business development and corporate evangelism.
  • org.manager by Ingentis: Workforce planning, org modeling and charting based on SAP SuccessFactors data support mergers and acquisitions by enabling visualization of multiple org structures and allowing drag and drop from one org chart to another.
  • Employee Engagement Suite by Semos: A new approach to employee engagement supports the growing HR needs in the areas of recognition and rewards, continuous feedback, organizational surveys, health and wellness, and employee work productivity.
  • Labor Management by Sodales: This employee relationship management solution offers features covering grievances, progressive disciplines, performance reviews and seniority rules management, including those involving unionized work environments. Management can perform investigations of each grievance with online collective bargaining agreements, manage grievance costs, conduct disciplinary actions and record all steps in a single place.

As the technology foundation for digital innovation with the SAP Leonardo system, SAP Cloud Platform allows organizations to connect people with processes and things beyond the SAP SuccessFactors HCM Suite and leverage transformative technologies including the Internet of Things (IoT), machine learning, Big Data and analytics, blockchain and data intelligence. The value of SAP Cloud Platform to customers includes:

  • Application extensions: Accelerate time to value for building and deploying apps and extensions that engage employees in new ways, allowing HR to be flexible and innovative without compromising their core HR process.
  • User experience: Achieve an intuitive, consistent and harmonized user experience across SAP SuccessFactors solutions and platform extensions. This empowers organizations to personalize their end-to-end HR landscape with a seamless, secure and beautiful user experience.
  • Data-driven insights: Leverage data from SAP SuccessFactors solutions and any other source to help make insightful business decisions that drive intelligent people strategies.

“Enterprise Information Resources Inc. (EIR) is an SAP Cloud Application partner, expert in optimizing and transforming compensation systems to provide business value immediately,” said France Lampron, president and CEO of Enterprise Information Resources Inc. “Capitalizing on the extensibility of SAP SuccessFactors solutions by leveraging SAP Cloud Platform provides us with an ideal development platform for EIR’s extension application — EIR Compensation Analytics.”

Solution extensions from SAP partners can be found in SAP App Center. HR professionals around the globe can virtually join key sessions at this year’s SuccessConnect in Las Vegas by registering here.

Source: All the above opinions are personal perspective on the basis of information provided by SAP

https://news.sap.com/sap-cloud-platform-helps-hr-leaders-connect-and-extend-sap-successfactors-solutions/


Bridging two worlds: Integration of SAP and Hadoop Ecosystems

September 6th, 2017 by blogadmin No comments »

Source: http://www.saphanacentral.com/2017/07/bridging-two-worlds-integration-of-sap.html

Proliferation of web applications, social media and the internet of things, coupled with large-scale digitalization of business processes, has led to explosive growth in the generation of raw data. Enterprises across industries are starting to recognize every form of data as a strategic asset and are increasingly leveraging it for complex, data-driven business decisions. Big Data solutions are being used to realize enterprise ‘data lakes’, storing processed or raw data from all available sources and powering a variety of applications and use cases.

Big Data solutions will also form a critical component of enterprise solutions for predictive analytics and IoT deployments in the near future. SAP on-premise and on-demand solutions, especially the HANA platform, will need closer integration with the Hadoop ecosystem.

What exactly is ‘Big Data’?
Data sets can be characterized by their volume, velocity and variety. Big Data refers to the class of data with one or more of these attributes significantly higher than in traditional data sets.

Big data presents unique challenges for all aspects of data processing solutions including acquisition, storage, processing, search, query, update, visualization, transfer and security of the data.

Hadoop Big Data Solution to the Rescue !

Hadoop is an open-source software framework for distributed storage and processing of Big Data using large clusters of machines.

Hadoop’s Modus Operandi –> Divide and Conquer: break the task into small chunks – store/process in parallel over multiple nodes – combine the results

Hadoop is not the only available Big Data Solution. Several commercial distributions/variants of Hadoop and other potential alternatives exist which are architecturally very different.

Combining high speed in-memory processing capabilities of SAP HANA with Hadoop’s ability to cost-effectively store and process huge amounts of structured as well as unstructured data has limitless possibilities for business solutions.

Hadoop system can be an extremely versatile addition to any SAP business system landscape while acting as –

  • A simple database and/or a low-cost archive to extend the storage capacity of SAP systems for retaining large volumes of historical or infrequently used data
  • A flexible data store to enhance the capabilities of the persistence layer of SAP systems and provide efficient storage for semi-structured and unstructured data like XML, JSON, text and images
  • A massive data processing / analytics engine to extend or replace analytical / transformational capabilities of SAP systems, including SAP HANA


Getting acquainted : Hadoop Ecosystem

Hadoop at its core is a Java based software library which provides utilities/modules for distributed storage and parallel data processing across a cluster of servers. However, in common parlance the term ‘Hadoop’ almost invariably refers to an entire ecosystem which includes a wide range of apache open-source and/or commercial tools based on the core software library.

Hadoop is currently available either as a set of open-source packages or via several enterprise-grade commercial distributions. Hadoop solutions are available as SaaS/PaaS cloud offerings from multiple vendors, in addition to traditional offerings for on-premise deployments.

Snapshot of prominent distributions/services according to latest market guides from Gartner and Forrester Research :

  • Apache Hadoop [open source]
  • Cloudera Enterprise | Cloudera CDH [open source]
  • Hortonworks Data Platform HDP [open source]
  • MapR
  • IBM Big Insights ^^
  • Amazon Elastic MapReduce (EMR)
  • Microsoft Azure HDInsight
  • Google Cloud Dataproc
  • Oracle Big Data Cloud Services
  • SAP Cloud Platform Big Data Service (formerly SAP Altiscale Data Cloud)

(^^ Potentially discontinued solution)

Hadoop core components serve as foundation for entire ecosystem of data access and processing solutions.

  • Hadoop HDFS is a scalable, fault-tolerant, distributed storage system which stores data using native operating system files over a large cluster of nodes. HDFS can support any type of data and provides high degree of fault-tolerance by replicating files across multiple nodes.
  • Hadoop YARN and Hadoop Common provide foundational framework and utilities for resource management across the cluster

Hadoop MapReduce is a framework for development and execution of distributed data processing applications. Spark and Tez which are alternate processing frameworks based on data-flow graphs are considered to be the next generation replacement of MapReduce as the underlying execution engine for distributed processing in Hadoop.

Map       : Split and distribute job

Reduce   : Collect and combine results
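
A minimal, single-process imitation of this flow (the classic word count) is sketched below. It only illustrates the split/distribute/combine idea; a real Hadoop job would express the same map and reduce functions against HDFS data and run them in parallel across the cluster.

```python
# A single-process imitation of map -> shuffle -> reduce (word count), purely to
# illustrate the idea; Hadoop would run these phases in parallel across nodes.
from collections import defaultdict
from itertools import chain

documents = ["big data big clusters", "big data small target"]

def map_phase(doc):
    # emit (key, value) pairs: one (word, 1) per word
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # group values by key, as the framework does between the map and reduce phases
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # combine all values for a key into the final result
    return key, sum(values)

mapped = chain.from_iterable(map_phase(d) for d in documents)
results = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(results)   # {'big': 3, 'data': 2, 'clusters': 1, 'small': 1, 'target': 1}
```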


A variety of data access / processing engines can run alongside the Hadoop MapReduce engine to process HDFS datasets. The Hadoop ecosystem is continuously evolving, with components frequently having complementing, overlapping and/or similar-appearing capabilities but vastly different underlying architectures or approaches.

Popular components-applications-engines-tools within Hadoop ecosystem (NOT an exhaustive list; several more open source and vendor specific applications are used by enterprises for specific use-cases)

  • Pig — Platform for development and execution of high-level language (Pig Latin) scripts for complex ETL and data analysis jobs on Hadoop datasets
  • Hive — SQL-on-Hadoop data warehouse that runs on top of Hadoop core and enables SQL-based querying of Hadoop datasets; supported by the Hive Metastore
  • Impala — Massively Parallel Processing (MPP) analytical database and interactive SQL-based query engine for real-time analytics
  • HBase — NoSQL (non-relational) DB which provides real-time random read/write access to datasets in Hadoop; supported by HCatalog
  • Spark — In-memory data processing engine which can run either over Hadoop or standalone as an alternative / successor to Hadoop itself
  • Solr — Search engine / platform enabling powerful full-text search and near real-time indexing
  • Storm — Streaming data processing engine for continuous computations & real-time analytics
  • Mahout — Library of statistical, analytical and machine learning software that runs on Hadoop and can be used for data mining and analysis
  • Giraph — Iterative graph processing engine based on the MapReduce framework
  • Cassandra — Distributed NoSQL (non-relational) DB with extreme high-availability capabilities
  • Oozie — Scheduler engine to manage jobs and workflows
  • Sqoop — Extensible application for bulk transfer of data between Hadoop and structured data stores and relational databases
  • Flume — Distributed service enabling ingestion of high-volume streaming data into HDFS
  • Kafka — Stream processing and message brokering system
  • Ambari — Web-based tool for provisioning, managing and monitoring Hadoop clusters and various native data access engines
  • Zookeeper — Centralized service maintaining Hadoop configuration information and enabling coordination among distributed Hadoop processes
  • Ranger — Centralized framework to define, administer and manage fine-grained access control and security policies consistently across Hadoop components
  • Knox — Application gateway which acts as a reverse proxy, provides perimeter security for the Hadoop cluster and enables integration with SSO and IDM solutions

Bridging two worlds – Hadoop and SAP Ecosystems

SAP solutions, especially SAP HANA platform, can be ‘integrated’ with Hadoop ecosystem using a variety of solutions and approaches depending upon the specific requirements of any use case.

SAP solutions which should be considered for the integration include :

  • SAP BO Data Services
  • SAP BO BI Platform | Lumira
  • SAP HANA Smart Data Access
  • SAP HANA Enterprise Information Management
  • SAP HANA Data Warehousing Foundation – Data Lifecycle Manager
  • SAP HANA Spark Controller
  • SAP Near-Line Storage
  • SAP Vora
  • …. <possibly more !>
  • Apache Sqoop [Not officially supported by SAP for HANA]

SAP HANA can leverage HANA Smart Data Access to federate data from Hadoop (access Hadoop as a data source) without copying the remote data into HANA. SDA enables data federation (read/write) using virtual tables and supports Apache Hadoop/Hive and Apache Spark as remote data sources in addition to many other database systems including SAP IQ, SAP ASE, Teradata, MS SQL, IBM DB2, IBM Netezza and Oracle.
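
As a rough sketch of what consuming such a federated source can look like from the application side, the snippet below queries a virtual table from Python using SAP’s hdbcli client. It assumes an administrator has already created the remote source and the virtual table via Smart Data Access; the host, credentials and object names are placeholders.

```python
# A minimal sketch of querying a Hive table federated into HANA as an SDA
# virtual table. Assumes the hdbcli Python client is installed and that the
# remote source and virtual table already exist; all names are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",   # placeholder HANA host
    port=30015,
    user="ANALYTICS_USER",
    password="********",
)

try:
    cursor = conn.cursor()
    # The virtual table behaves like a local HANA table, but rows are read from
    # the remote Hive/Spark source at query time; no data is copied into HANA.
    cursor.execute('SELECT COUNT(*) FROM "ANALYTICS"."VT_HIVE_SALES_LOG"')
    print("rows in remote Hive table:", cursor.fetchone()[0])
finally:
    conn.close()
```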

Hadoop can be used as a remote data source for virtual tables in SAP HANA using following adaptors (in-built within HANA):

  • Hadoop/Spark ODBC Adaptor — Requires installation of Unix ODBC drivers + Apache Hive/Spark ODBC drivers on the HANA server
  • SPARK SQL Adaptor — Requires installation of SAP HANA Spark Controller on the Hadoop cluster (recommended adaptor)
  • Hadoop Adaptor (WebHDFS)
  • Vora Adaptor

SAP HANA can also leverage HANA Smart Data Integration to replicate required data from Hadoop into HANA. SDI provides pre-built adaptors & adaptor SDK to connect to a variety of data sources including Hadoop. HANA SDI requires installation of Data Provisioning Agent (containing standard adaptors) and native drivers for the remote data source, on a standalone machine. SAP HANA XS engine based DWF-DLM can relocate data from HANA to/between HANA Dynamic Tiering, HANA Extension nodes, SAP IQ and Hadoop via Spark SQL adaptor / Spark Controller.


SAP Vora is an in-memory query engine that runs on top of Apache Spark framework and provides enriched interactive analytics on the data in Hadoop. Data in Vora can be accessed in HANA either directly via Vora Adaptor or via SPARK SQL Adaptor (HANA Spark Controller). Supports Hortonworks, Cloudera and MapR.

SAP BO Data Services (BODS) is a comprehensive data replication/integration (ETL processing) solution. SAP BODS has capabilities (via the packaged drivers-connectors-adaptors) to access data in Hadoop, push data to Hadoop, process datasets in Hadoop and push ETL jobs to Hadoop using Hive/Spark queries, pig scripts, MapReduce jobs and direct interaction with HDFS or native OS files. SAP SLT does not have native capabilities to communicate with Hadoop.

SAP Business Objects (BI Platform, Lumira, Crystal Reports, …) can access and visualize data from Hadoop (HDFS – Hive – Spark – Impala – SAP Vora), with the ability to combine it with data from SAP HANA and other non-SAP sources. SAP BO applications use their in-built ODBC/JDBC drivers or generic connectors to connect to the Hadoop ecosystem. Apache Zeppelin can be used for interactive analytics visualization from SAP Vora.

Apache Sqoop enables bulk transfer of data between unstructured, semi-structured and structured data stores. Sqoop can be used to transfer data between Apache Hadoop and relational databases including SAP HANA (although not officially supported by SAP).

Getting started : Hadoop Deployment Overview

Hadoop is a distributed storage and processing framework which would typically be deployed across a cluster consisting of up to several hundreds or thousands of independent machines, each participating in data storage as well as processing. Each cluster node could either be a bare-metal commodity server or a virtual machine. Some organizations prefer to have a small number of larger-sized clusters; others choose a greater number of smaller clusters based on workload profile and data volumes.

HDFS data is replicated to multiple nodes for fault tolerance. Hadoop clusters are typically deployed with an HDFS replication factor of three, which means each data block has three replicas – the original plus two copies. Accordingly, once replication and working space are accounted for, the raw storage requirement of a Hadoop cluster is roughly four times the anticipated input/managed dataset size (see the sizing sketch below). The recommended storage option for Hadoop clusters is nodes with local (Direct Attached Storage – DAS) disks. SAN / NAS storage can be used but is not common, and is often inefficient, since a Hadoop cluster is inherently a ‘share nothing’ architecture.
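
A back-of-the-envelope version of that sizing rule, with purely illustrative figures:

```python
# Back-of-the-envelope sizing for the rule of thumb above: three-way replication
# plus headroom for temporary/intermediate data works out to roughly four times
# the managed data set. All figures are illustrative assumptions.
managed_data_tb = 100            # anticipated input / managed data set
replication_factor = 3           # HDFS default: original block plus two copies
usable_fraction = 0.75           # leave ~25% headroom for temp data and system overhead

raw_capacity_tb = managed_data_tb * replication_factor / usable_fraction
print(f"raw cluster capacity needed: ~{raw_capacity_tb:.0f} TB")  # ~400 TB, i.e. ~4x the data set
```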

Nodes are distinguished by their type and role. Master nodes provide key central coordination services for the distributed storage and processing system while worker nodes are the actual storage and compute nodes.

Node roles represent sets of services running as daemons or processes. A basic Hadoop cluster consists of NameNode (+ Standby), DataNode, ResourceManager (+ Standby) and NodeManager roles. The NameNode coordinates data storage on DataNodes, while the ResourceManager coordinates data processing on NodeManager nodes within the cluster. The majority of nodes within the cluster are workers, which typically perform both the DataNode and NodeManager roles; however, there can be data-only or compute-only nodes as well.

Deployment of various other components/engines from Hadoop ecosystem brings more services and node types/roles in play which can be added to cluster nodes. Node assignment for various services of any application is specific to that application (refer to installation guides). Many such components also need their own database for operations which is a part of component services. Clusters can have dedicated nodes for application engines with very specific requirements like in-memory processing and streaming.

Master / management nodes are deployed on enterprise-class hardware with HA protection, while worker nodes can be deployed on commodity servers, since the distributed processing paradigm itself and HDFS data replication provide the fault tolerance. Hadoop has its own built-in failure-recovery algorithms to detect and repair Hadoop cluster components.

Typical specifications of Hadoop cluster nodes depend on whether the workload profile is storage-intensive or compute-intensive. All nodes do not necessarily need to have identical specifications.

  • 2 quad-/hex-/octo-core CPUs   and   64-512 GB RAM
  • 1-1.5 disks per core; 12-24 hard disks of 1-4 TB each; JBOD (Just a Bunch Of Disks) configuration for worker nodes and RAID protection for master nodes

Hadoop Applications (Data access / processing engines and tools like Hive, Hbase, Spark and Storm, SAP HANA Spark Controller and SAP Vora) can be deployed across the cluster nodes either using the provisioning tools like Ambari / Cloudera Manager or manually.

Deployment requirements could be: all nodes, at least one node, one or more nodes, multiple nodes, or a specific type of node


Getting started : Deployment Overview of ‘bridging’ solutions from SAP

SAP Vora consists of a Vora Manager service and a number of core processing services which can be added to various compute nodes of the existing Hadoop deployment

— Install SAP Vora Manager on management node using SAP Vora installer

— Distribute SAP Vora RPMs to all cluster nodes and install using Hadoop Cluster Provisioning tools

— Deploy Vora Manager on cluster nodes using Hadoop Cluster Provisioning tools

— Start Vora Manager and Deploy various Vora services across the cluster using Vora Manager

SAP HANA Spark Controller provides an SQL interface to the underlying Hive/Vora tables using Spark SQL and needs to be added to at least one of the master nodes of the existing Hadoop deployment

— Install Apache Spark assembly files (open source libraries); not provided by SAP installer

— Install SAP HANA Spark Controller on master node using Hadoop Cluster Provisioning tools

SAP HANA Smart Data Integration and Smart Data Access are native components of SAP HANA and do not require separate installation. Smart Data Integration does, however, require activation of the Data Provisioning server and installation of Data Provisioning Agents.

— Enable Data Provisioning server on HANA system

— Deploy Data Provisioning Delivery Unit on HANA system

— Install and configure Data Provisioning Agents on remote datasource host or a standalone host

SAP HANA Data Warehousing Foundation – Data Lifecycle Manager is an XS engine based application and requires separate installation on the HANA platform.

SAP Cloud Platform Big Data Service (formerly SAP Altiscale Data Cloud) is a fully managed cloud based Big Data platform which provides pre-installed, pre-configured, production ready Apache Hadoop platform.

— Service includes Hive, HCatalog, Tez and Oozie from Apache Hadoop ecosystem in addition to the core Hadoop MapReduce / Spark and HDFS layers

— Supports and provide runtime environment for Java, Pig, R, Ruby and Python languages

— Supports deployment of non-default third-party applications from the Hadoop ecosystem

— Service can be consumed via SSH and webservices

— Essentially a Platform-as-a-Service (PaaS) offering and comprises :

— Infrastructure provisioning

— Hadoop software stack deployment and configuration

— Operational support and availability monitoring

— Enables customers to focus on business priorities and their analytic / data-science aspects by delegating technical setup and management of the Hadoop platform software stack and the underlying infrastructure to SAP’s cloud platform support team

Final thoughts

Gartner’s market research and surveys assert that Hadoop adoption is steadily growing and also shifting from the traditional monolithic, on-premise deployments to ad hoc or on-demand cloud instances.

Get ready to accomplish big tasks with big data !!

WHERE TO START WITH SAP S/4HANA

August 18th, 2017 by blogadmin No comments »

Have you ever wondered, “Where do I start with SAP S/4HANA?” There are four strategies you can begin immediately that will pay off with a smoother deployment.

These strategies are written with new SAP deployments in mind. For organizations already running SAP ERP and converting it to SAP S/4HANA, the strategies would be a bit different.

Prepare Your Business Users

Getting business users involved is as important as any technical aspect of the project. This is because SAP S/4HANA is not merely ERP running in-memory. SAP S/4HANA uses a simpler data model to transform business processes. For example, there is no more data reconciliation between finance and controlling in the financial period-end close, ending the most tedious and error-prone part of the entire process. This is a major productivity win for finance, of course, but it is still a change and one they need to know up front.

Financial close improvements are just one example. Business Value Adviser can help you understand the many other process improvements. Also, most successful SAP S/4HANA projects begin with a prototype, often running inexpensively in the cloud on a trial system.

Prepare Your Data

SAP is ready with a complete set of data migration tools including templates, data mapping, and data cleansing capability. You can start investigating the data mapping right away. Since SAP S/4HANA is built on a simpler data model and has fewer tables, getting data into SAP S/4HANA is easier than with other ERP systems.
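
As a purely illustrative (non-SAP) sketch of the kind of field mapping and cleansing work you can start early, the snippet below renames legacy columns to target names, normalises a few values and flags incomplete records. The column names are invented for illustration; SAP’s own migration templates define the real target structures.

```python
# An illustrative (non-SAP) sketch of early data-mapping work: rename legacy
# columns to target names, trim and normalise values, and flag records that
# fail a simple completeness check. Column names are invented placeholders.
import pandas as pd

legacy = pd.DataFrame({
    "CUST_NO": [" 1001", "1002", None],
    "CUST_NAME": ["Acme Corp ", "Globex", "Initech"],
    "CNTRY": ["ca", "US", "de"],
})

field_map = {"CUST_NO": "CustomerID", "CUST_NAME": "CustomerName", "CNTRY": "Country"}

target = legacy.rename(columns=field_map)
target["CustomerID"] = target["CustomerID"].str.strip()
target["CustomerName"] = target["CustomerName"].str.strip()
target["Country"] = target["Country"].str.upper()

# Flag incomplete records for cleansing rather than loading them as-is.
target["needs_review"] = target["CustomerID"].isna()
print(target)
```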

You should also decide how much historical data you want to include. You can reduce cost by using data aging so that only the most useful data is stored in memory while the rest is available on disk-based systems.

Organize the Deployment Team

Organizations new to SAP have nothing to decide when it comes to the deployment path. You set up a new SAP S/4HANA deployment and migrate data from the legacy system. Organizations already running SAP ERP have more to do at this point, especially if converting their system to SAP S/4HANA.

Instead, focus on the deployment team, perhaps bringing SAP experts on board through hiring or teaming up with an SAP partner. The most successful deployments do initial workshops for functional planning, set up prototype and test systems, and start getting end user feedback early on.

The deployment team should also familiarize themselves with SAP Activate, for the latest best practices and guided configuration.

Determine the Deployment Destination

The move to SAP S/4HANA is an ideal time to bring ERP into your cloud strategy. Since it is likely that an organization new to SAP does not have SAP HANA expertise, this makes SAP S/4HANA a prime candidate to run in the cloud.

Perhaps a more accurate term, though, would be clouds. You have a complete choice of deployment with SAP S/4HANA, including public cloud, IaaS (Amazon, Azure), and private cloud with SAP or partners. On premise is an option as well, of course.

Other ERP products are completely different from one deployment option to the next, and many don’t even have an on premise or private cloud option. Whether the destination is on premise or clouds, SAP S/4HANA uses the same code line, data model, and user experience, so you get the consistency essential to hybrid or growing environments. This means that instead of supporting disparate products, IT spends more time on business processes improvement.

Source: All the above opinions are personal perspective on the basis of information provided by SAP on SAP S/4HANA

http://news.sap.com/where-to-start-with-sap-s4hana/