M&A, Business Models and Ecosystems in the Software Industry

Karl´s blog

Should robots pay pension insurance contributions?

Robots

By robots I mean physical robots and software robots that take over decision-making tasks. Software robots are often based on predictive analytics, machine learning, or a combination of both. With the help of data augmentation, they may be superior to humans in decision-making, since they have more data at their disposal and can process it faster.

While Sherry Turkle deals with the question of whether her patients should marry robots, I am concerned with the question of whether robots should pay into the pension insurance system.

Why should robots pay into the pension insurance system?

Machine learning is dramatically increasing the automatability of business tasks. Robots will steadily take over more and more tasks from humans, in all industries, regardless of education level, and yes, also in academic professions such as doctors and lawyers.

In short, I see this leading to three effects:

  1. An increase in the degree of automation of all tasks in companies. I suspect that in the long run the number of automated tasks will increase dramatically and the demand for workers will decrease dramatically. This particularly affects tasks whose holders are supposed to make decisions.

  2. The intellectual demands on the remaining employees will increase sharply. These employees will partially or fully take over the tasks that cannot be performed by robots and machine-learning-based software robots.

  3. The machine-learning-based software robots will learn by observing the employees mentioned in point 2 and will eventually be able to fully replace these task holders as well.

Conclusion

Pension insurance contributions are borne by those who work. Today, that includes robots in the form of physical robots and software robots. So let us advocate that they also pay into the pension insurance system.

I look forward to your comments. #robotsaresavingus

Merger integration success based on best practices

With all the mergers and acquisitions activity going on in the markets, it is paramount to manage the planned integration of targets into the acquiring company flawlessly.

The integration strategy and the integration approach are different for each merger, and each merger has different synergy objectives.

This page is meant to shed light on the current state of the art in knowledge and business practices for post merger integration. It tries to structure the problem and thus to provide a way to find the best approach for post merger integration.

When to start with merger integration related tasks

We introduce merger integration due diligence as a new type of due diligence that arises from the objective “Maximize likelihood of integration success”. See the separate page for this topic.

The task of post merger integration

An important ingredient of an acquisition strategy is how you integrate the acquired company. Let us describe the task of post merger integration in terms of goals and objectives. You have to think carefully about goals and objectives, since these define what is done during merger integration.

The goal of post merger integration is to plan and execute the integration of two businesses. Within each business, there is an organization and there are many processes, which are to be aligned and/or integrated.

Objectives of the merger integration task are:

  • Maximize likelihood of integration success: each merger integration aims at successful completion, i.e. at avoiding failure of the integration.

  • Continue target operations: in most cases, it is important to not interrupt the target operations with merger integration activities.

  • Fit the integration type: there are different ways to integrate two companies, which are determined in the integration strategy. More information about merger integration types can be found here: Merger Integration Types

  • Fulfill synergy objectives: every merger has synergy expectations and objectives. Merger integration is targeted at creating such synergies.

Decomposition of the merger integration task

There are three subtasks: designing the new entity, planning the merger integration project and executing the merger integration project.
The first two should be started during due diligence to ensure merger integration success.


MergerIntegrationTaskDecomposition.png

The four Merger Integration Types

In the high level model below, you end up with four generic types of post merger integration:

  1. Preservation: The target company is preserved meaning that you leave the target company autonomous. Nevertheless, integration of financial reporting and financial processes might make sense.

  2. Holding: The acquiring company just keeps the ownership of the target company, but does not integrate the target company.

  3. Symbiosis: In this merger type, you decide where integration is needed to reach the objectives of the merger integration.

  4. Absorption: The acquiring company fully absorbs the target company. All organizations and processes of the target company are to be fully integrated into the acquiring company.

integrationtype.png

Stay tuned, listen in on twitter @karl_popp and connect with me on Linkedin for more best practices.


Open Source business models

Open source business models are commercial business models based on open source software. This webpage contains a short version of a chapter in the book Advances in software business.

Commercial use of open source

For a commercial company, open source software is software that is licensed to that company under an open source license. The commercial company may make use of the open source software, e.g. use or redistribute it free of charge, but it also has to fulfill the obligations, such as delivering a copy of the license text with the software.
So the rights and obligations have to be analyzed diligently to make sure there is no violation of the license terms.

Suppliers of open source software

Open source software can be supplied by a community or by a commercial company. We speak of community open source and commercial open source, respectively. For community open source, a community of people provides creation, maintenance and support for an open source software. In most cases, the community provides these services free of charge.

There are, of course, differences between a company and the open source community. These differences are important to understand, because they influence a customer's supplier decision and they also create niches in which companies can establish a business.

Commercial open source vs. community open source

So a customer might decide for commercial open source if he needs customized license terms, runs open source in a mission-critical environment and thus needs service level agreements for support, or needs maintenance provided in a different way than via the open source community.

In many business contexts it also makes sense to have liability and warranty provisions from a supplier when using open source. Most of the existing open source licenses exclude any warranty or liability (3). This is another reason why companies might choose commercial open source over community open source. Please find more information in the book “Best practices for commercial use of open source software”.

Classification of open source business models

Based on a classification of business models (Weill et al.) we will have a look at open source business models.

Open source usually is free of charge, but that does not necessarily mean there is no compensation for using the open source component.
The next figure shows a classification of generic business models. The business models relevant for commercial open source business are marked in bold. In this general classification, software classifies as an intangible product; see the corresponding column “Intangible”. Software can be created or written (“Inventor”), distributed (“IP Distributor”), or licensed or rented to customers (“IP Lessor”). In addition, the customer needs services to run and maintain the software, like implementation, support and maintenance services. These classify as “Contractor” business. We assume here that all open source businesses make use of at least a subset of these four business models.

Whether it is a community or a commercial software vendor, one or more of these business models are applied. By choosing a specific selection of business models, so-called hybrid business models are created. Creating hybrid business models means combining different business models with their specific goals, requirements and cost structures.

Since these business models are models on a type level, there might be different implementations of how a certain business model is run. An open source community might run the Inventor business for creating software in a different way (leveraging the community) than a commercial software vendor (leveraging a development team), from a process as well as from a resource perspective. But on a type level, both run the same type of business, called Inventor.

So going forward, we will analyze commercial and community open source business models as a selection of a subset of the business models identified here: Inventor, IP Lessor, IP distributor and Contractor.
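To illustrate the idea of hybrid business models as subsets of the four generic types, here is a short Python sketch. The type names follow the classification above; the example sets and the helper function are hypothetical illustrations, not taken from the cited book.

```python
from enum import Enum

class BusinessModel(Enum):
    INVENTOR = "Inventor"              # creating/writing software
    IP_LESSOR = "IP Lessor"            # licensing or renting software to customers
    IP_DISTRIBUTOR = "IP Distributor"  # distributing software
    CONTRACTOR = "Contractor"          # human services: support, maintenance, consulting

# A hybrid business model is a selection (subset) of the generic business models.
COMMUNITY_OPEN_SOURCE = {
    BusinessModel.INVENTOR,
    BusinessModel.IP_LESSOR,
    BusinessModel.CONTRACTOR,
}
# A commercial vendor may additionally run the IP Distributor business.
COMMERCIAL_OPEN_SOURCE = set(BusinessModel)

def is_hybrid(models: set) -> bool:
    """A hybrid business model combines more than one generic business model."""
    return len(models) > 1

print(is_hybrid(COMMUNITY_OPEN_SOURCE))                      # True
print(BusinessModel.IP_DISTRIBUTOR in COMMUNITY_OPEN_SOURCE)  # False
```

The set-based view makes the key statement of this section explicit: the community and the commercial vendor differ only in which subset of the same four type-level business models they select.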

Community open source business model

The open source community business model usually makes use of the following business models: Inventor, IP Lessor and Contractor.

For the community, the Inventor business is what the community is most involved in. It is about creating open source software and engaging with the community members to coordinate the work and collect the contributions of the community members.

The IP Lessor business is also important for the community. The IP lessor business defines the terms and conditions of the open source license and makes the software available to customers. The license is defined by the community and all customers using the software have to comply with it. In some cases, there are multiple different licenses for an open source software that a customer can choose from.

The Contractor business comprises all human services to customers. The community typically provides these via email; they include services like maintenance, support, and translation for country-specific versions, all carried out by community members. In almost every case, the customer does not pay for these services, but the customer also has no right to enforce any of these services and has no service level agreements, such as a defined maximum response time for support incidents.
The community can serve two types of customers: software vendors and (end) customers. For software vendors, the open source community works as a supplier of software; for customers, the open source community works as a software vendor licensing software to the customer.
These two relationships differ in the way customers and software vendors may use the software. Customers usually license the software for internal use only. Software vendors license software for internal use and/or for distribution to customers. Often open source software is included in commercial software and provided to customers by the software vendor. In this case, the software vendor has to make sure he complies with all licenses of all open source software he includes in his software product. Please find more information in the book “Best practices for commercial use of open source software”.

Commercial open source business models overview

In the last section we described the community business model; now we turn to the commercial open source business model. Figure 4 shows the typical business models implemented by commercial software vendors. As mentioned before, a commercial software vendor does not have to implement all of these business models, but can rather build a unique business model by selecting a subset of the available business models. One basic difference to community open source is that the IP Distributor business model is an option for commercial companies.
The history of commercial open source companies shows that in the beginning these companies focused on services around open source software, which matches the Contractor business.

The next step was to build distributions of open source software, e.g. for Linux. This matches the IP Distributor business model.

Today, we find all kinds of hybrid business models around open source. Companies build software and donate it, completely or partially, to the open source community (Inventor business model). Commercial software vendors often package, change or extend existing community open source software, so the community acts as a supplier of open source software to the software vendor. In some cases the software vendor does not use existing open source software from a community, but chooses to offer its proprietary software under a dual licensing strategy, e.g. under a commercial and an open source license. Please find more information in the book “Best practices for commercial use of open source software”.

Commercial services for open source

Since open source licenses are free of charge, commercial companies first and foremost focused on providing services around open source software. The expectation was simply that customers would still need services and since the license was free, that customers would have more money to spend on services.

Commercial open source companies provide the following services for open source software: maintenance, support, consulting, and extension or adaptation of open source software to a customer's needs.

Maintenance services consist of the following activities: building future versions, bug fixes and upgrades and providing them to the customers.

Support services consist of accepting, tracking and resolving incidents that the customer encounters while using the software.

Consulting services mean planning and executing the installation and go-live of customers´ system landscapes containing the software.

Extension or adaptation of open source software based on customers' requests means designing, programming, testing and delivering open source software that has been modified or expanded. Examples of extensions and modifications are:

  • Functional Extensions for open source applications with country-specific functionality or customer specific functionality;

  • Extending the usage scenarios for open source to additional countries by adding additional translations of user interfaces;

  • Adapting open source software means making open source software run on customers' hardware and software platforms.

Summary and outlook

The evolution of open source and commercial open source business is still underway. In the future we will see additional varieties of open source business licenses, such as in open source hardware or designs, and new open source business models, like in open source on demand applications or open source software in cloud environments. Please find more information in the book “Best practices for commercial use of open source software”.

The further development of post merger integration

PMI working group of the Bundesverband M&A

On January 16, the Bundesverband M&A held its constituent meeting. It emerged from the Gesellschaft für PMI. The host, Prof. Feix, welcomed company representatives, including from Ardex and SAP, representatives of the Bundesverband M&A, including Prof. Lucks, as well as representatives from universities, at Hochschule Augsburg.

In agile, design-thinking-based workshops, current topics and problems were discussed, and goals and topics for newly formed working groups were defined.

Topics discussed included, for example, standardization and best practices for the M&A process, consideration of integration issues in all phases of the process, transformation of the M&A process through digitization, transformation and digitization of companies through acquisitions, cultural integration, ecosystem integration, cooperation with universities, knowledge documentation and knowledge transfer, as well as events of the working group.

Next steps are the start of the work in the working groups as well as the planning of an event in which the knowledge of the working group will be shared with the public.

MA_1.jpg
Best practices for commercial use of Open Source

Open Source best practices

Today, all software vendors make use of open source.

  • They strive for excellence in leveraging open source software in commercial software products while ensuring licensing compliance and governance.

  • They strive for excellence in using open source based business models for commercial success.

  • They strive for excellence in leveraging development models used in open source communities and in adapting these for in-house use at commercial software vendors.

  • They analyze the usage of open source software during due diligence when acquiring software companies.

To reach excellence, you have to be equipped with knowledge about best practices for open source. This blog is meant to provide you with the latest knowledge about open source, especially open source licensing in commercial software, so you can reach excellence in open source matters. Please find more information in the book “Best practices for commercial use of open source software”.

Open Source and Open Source Licensing for commercial software

This page shows you why you should carefully consider using open source software in commercial software: the advantages and disadvantages of open source usage, why open source is used in commercial software, and how to manage open source licensing and control open source usage.

Most important is professional management of open source usage by defining an open source policy for your software company and by following structured processes for open source licensing approval and control. Rest assured that attorneys, consultants and tool vendors are there to assist you.

Advantages of Open Source usage

Simple and fast access to open source software is often named as a key advantage. Low cost and high quality are additional reasons to consider open source. For a software vendor, there might also be a strategic advantage in using open source software to provide the "non-competitive" part of a solution, while its own developers care for the competitive part.

Motivation for open source usage in commercial software

Usually there are numerous open source components used in commercial software. It makes sense to use open source in commercial software if and only if you can comply with the open source license attached to that open source software. If you do so, you can leverage open source to quickly create functionality and to build on trusted functionality that is provided by software vendors or the open source community.

Relevance of Open Source Licensing

Open source components like the International Components for Unicode (ICU) or Hibernate are used in many commercial software solutions. Non-compliance with the license terms can have dramatic consequences. To avoid these consequences, a software vendor has to install an open source licensing policy and practice. But what are the negative aspects and side effects of open source licenses? Open source licensing is also a relevant part of due diligence efforts in the software industry.

Potential disadvantages of open source usage

Use of open source in commercial software can show the following disadvantages:

  • Missing commercial services, like support and service level agreements, impact the ability to run the software in commercial environments;

  • Commercialization of software might be blocked;

  • Missing or incomplete license attributes, e.g. for sublicensing software or running software in an on-demand environment;

  • Missing warranty and liability;

  • Non-compliance with license terms might lead to litigations.

Open Source licenses and software supply chains

Usage and licensing rights are transferred between players in the software supply chain. Software passed along the supply chain might contain open source software, too. Due to the copyleft effect of certain licenses, the non-compliance of one supplier might impact all other software companies down the supply chain.
So software vendors should diligently check which open source components are contained in the software supplied to them and which license terms apply.
Tools ease this work: you can use open source scanners to find open source code and the corresponding license terms. Please find more information in the book “Best practices for commercial use of open source software”.
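As a minimal sketch of such a supply chain check, the following Python snippet flags components whose licenses need review before distribution. The manifest format, the license identifiers and the allowlist are illustrative assumptions, not the behavior of any specific scanner.

```python
# Hypothetical allowlist and copyleft list; a real policy would be company-specific.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}
COPYLEFT_LICENSES = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}

def check_supply_chain(components):
    """Flag supplied components whose license terms need review before distribution.

    `components` is a list of (component_name, license_id) pairs, e.g. as
    reported by an open source scanner.
    """
    findings = []
    for name, license_id in components:
        if license_id in COPYLEFT_LICENSES:
            # Copyleft may propagate obligations down the supply chain.
            findings.append((name, license_id, "copyleft: review distribution terms"))
        elif license_id not in ALLOWED_LICENSES:
            findings.append((name, license_id, "unknown license: needs approval"))
    return findings

# Example manifest (component names are made up for illustration).
supplied = [("icu", "MIT"), ("somelib", "GPL-3.0"), ("otherlib", "Custom-1.0")]
for finding in check_supply_chain(supplied):
    print(finding)
```

Real scanners additionally detect components by code fingerprinting rather than trusting a declared manifest, which is exactly why the text recommends tool support for this problem.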

Open Source Software License Due Diligence

Often, commercial software contains open source components. In the due diligence for acquiring a commercial software company, you have to check whether the company complies with the licenses of the open source software contained in its products (open source due diligence). The following figure shows typical components of commercial software that are analyzed during due diligence. They come from service providers, from suppliers of OEM software, from freeware and open source software, and they are also created by employees.

Next in due diligence we look at the utilization of open source software. In the following figure, the software vendor distributes the software products to resellers and to direct customers. The key fact that triggers open source license compliance is often distribution. With distribution, the open source license terms apply and have to be complied with. Often open source license terms require that the source code be revealed and/or the software be provided free of charge. This is of course a critical issue in the due diligence of commercial software.
Software vendors' core business is the monetization of usage rights granted to customers. Open source software and the corresponding licenses have to be diligently analyzed in open source due diligence.

You have to ensure that

  • all current and planned utilizations of open source software are covered and that

  • no open source license terms are violated.

Open Source Software Governance

Open Source Governance is the risk management process for using open source software in commercial software products. So what is the risk in using open source software?

Open source usage has several risks, like:

  • Operational risk: Missing commercial services, like support, might impact the ability to serve customers well in commercial environments;

  • Commercial risk: Monetization of software products might be blocked by open source licenses; missing warranty and liability terms for software increase the warranty and liability risk for the commercial software vendor; limitations of business models and delivery models might occur if the open source license does not explicitly allow them or even forbids them.

  • License attribute risk: Missing or incomplete license attributes, e.g. for sublicensing software or running software in a cloud environment; non-compliance with license terms might lead to litigation.

  • Patent litigation risk: Open source software might violate intellectual property rights like patents, which poses a legal risk.

Establishing open source governance

Proactive management of open source usage and open source licensing is paramount for commercial software vendors. From design to shipment of software solutions, open source governance is demanded. Please find more information in the book “Best practices for commercial use of open source software”.

Before you start with open source governance, you have to define your open source policy containing:

  • Strategic topics:

    • Risk level accepted by the management

    • Overall investment in organization, processes and tools for open source compliance

  • Tactical topics:

    • Level of management to approve open source usage

    • Frequency and intensity of governance

    • Software license tracking: Open source scan tool selection

    • Size of open source governance functions

  • Operational topics:

    • List of acceptable open source licenses based on risk level

    • Budget for Open Source Scan Tools

    • A process for governance of used open source components.
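The policy elements above can be captured in a simple configuration record and used for a basic allowlist check. The field names and values below are hypothetical examples chosen for illustration, not a prescribed format.

```python
# Illustrative open source policy record mirroring the strategic, tactical
# and operational topics listed above; all names and values are assumptions.
open_source_policy = {
    "strategic": {
        "accepted_risk_level": "medium",
        "annual_compliance_budget": 100_000,  # organization, processes and tools
    },
    "tactical": {
        "approval_level": "VP Engineering",
        "governance_review_frequency": "quarterly",
        "governance_team_size": 2,
    },
    "operational": {
        "acceptable_licenses": ["MIT", "Apache-2.0", "BSD-3-Clause"],
        "scan_tool_budget": 20_000,
        "component_approval_process": "scan -> legal review -> approve/reject",
    },
}

def approve_component(license_id, policy):
    """Reactive governance in its simplest form: a component is acceptable
    only if its license is on the operational allowlist; otherwise it must
    be removed from the product or escalated for review."""
    return license_id in policy["operational"]["acceptable_licenses"]

print(approve_component("MIT", open_source_policy))      # True
print(approve_component("GPL-3.0", open_source_policy))  # False
```

Keeping the policy as structured data rather than a document makes it machine-checkable, which is a prerequisite for the active, tool-integrated governance described next.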

We see two types of open source governance: reactive and active. Reactive open source governance merely reacts to open source components already used in a commercial software product and evaluates whether each open source use is acceptable or not. As a result, the open source component can be used or has to be removed from the product.

An active approach to open source governance is to provide access to open source components from within development tools. The development tools offer only those open source components that the company allows under its open source policy. Please find more information in the book “Best practices for commercial use of open source software”.

Does anybody miss Klout? New scores of social influence

How do you rate your social influence across multiple social networks?

After Klout was shut down, the search for alternatives began. Let me share some of my experiences with Kred and the Linkedin Social Selling Index.

There is Kred; my score is 989 out of 1,000. It seems to be calculated based on several social networks, but it is hard to tell how exactly.

Then there is the Linkedin Social Selling Index. Here is my result. This index is a summary of four sub-ratings that rate different ways of engaging with the business audience on Linkedin only. Other social networks are not covered.

Capture.PNG

The Linkedin Social Selling Index also offers a comparison with your industry and with the people in your network, which is great.

Capture.PNG

Any proposals for other indexes to be used? Please let me know in the comments. Thank you.

Ensuring merger integration success with innovative due diligence

Merger integration success based on innovative due diligence

We introduce merger integration due diligence as a new type of due diligence that arises from the objective “Maximize likelihood of integration success”.

Definition of merger integration due diligence

Merger integration due diligence has the goal of reviewing the merger integration project and plans.

All aspects of merger integration are reviewed for viability and for likelihood of success. Viability requires the work breakdown structure for the integration to be consistent and complete. It also requires resources (employees and budgets) to be sufficient and available. The objective of the task is to maximize the likelihood of merger integration success.

DuediligenceTask.png

Based on the decomposition of the merger integration task we can define the corresponding decomposition of the merger integration due diligence task.

Review of the design of the new entity

The design of the new entity has to be reviewed for consistency and completeness. We start with the business strategy and plan layer and review the defined business strategy for the new entity. Then we enter the second layer and ask questions like: Will the business processes work? Are the business processes compliant with regulations? Is governance of the business ensured?
In parallel, we have a look at the business resources and ask: Are enough qualified resources planned and available? Are the assignments of resources to tasks sufficient?

Review merger integration plans

Next we review merger integration plans. Keeping in mind the design of the new entity and the resource situation, we review the schedules and the steps of the merger integration plans. We ask questions like: Can the merger integration plan be executed the way it is defined? Will sufficient resources and budgets be available at the right time to execute the merger integration plan successfully? What happens if we run late or we have resource shortages?

Review merger integration project

This is the part of the review that is often neglected in practice. We review the structure and behavior of the merger integration project.
It is important to keep in mind that the word “project” implies professional management of the integration, leveraging professional project managers who are experienced with complex projects and equipped with the skills of a certified project manager. We should also have a project steering committee in place that has wide competencies and can drive and take decisions quickly.
We also focus on getting answers to questions like: Do we have the right assignments of resources to merger integration tasks? Are the resources capable of executing their assigned tasks? Do the resources have appropriate social competences to lead people and convince them the integration is the right thing to do?

With the results of the merger integration due diligence, you are well prepared to have the right budget, business plan and integration approach.

Events, papers and books in M&A and software business ahead of us

Dear readers,

thank you for your interest in my blog. I wish you a Merry Christmas and a Happy New Year!

What will 2019 bring? More robots in our homes? We already have two: one vacuum and one mopping robot. More robotic process automation at work, combined with machine learning? Sure. More electric cars (maybe one for me?)? Sending an avatar to work instead of me? Probably not. We don't know, and that makes life interesting.

Here are some things I will be working on in 2019:

Papers and books

I have the honor to co-edit an issue of IEEE Software:

Michael Cusumano, Slinger Jansen, Karl Michael Popp (eds.), IEEE Software special issue on Managing Software Platforms and Ecosystems, to be published 2019.

I will work on completing the book:

Karl Michael Popp, Successful Post Merger Integration:  State of the art and Innovations in M&A processes, Books on demand, to be published 2019.

We had a great European Workshop on Software Ecosystems at the Platform Economy Summit in Berlin and will publish the proceedings as soon as possible:

Peter Buxmann, Thomas Aidan Curran, Gerald Eichler, Slinger Jansen, Thomas Kude, Karl Michael Popp (eds.): European workshop on software ecosystems 2018, Books on demand.

With the help of machine-learning-based translation robots, I might publish another German book, too.

Events

Besides the usual European Workshop on Software Ecosystems and Denkfabrik Wirtschaft, a new workshop will come up: a discussion battle between researchers and practitioners on the topic of mergers and acquisitions, held in London. Xperience Connect will host several thought leadership events on the digitization of M&A and more.

All the best for you and your families in 2019

Always look to the future

Karl

Systematic identification of PMI risks in the due diligence process

[this blog is an excerpt from an interview with me]

 "My experience has shown that there are certain risks that can always be observed in any acquisition."

According to your experience, what merger integration risks are there?

Every takeover of a company is associated with numerous risks. On the one hand, there may be unpleasant surprises lurking in the target company; on the other hand, the integration itself also holds many dangers. Finally, risks may also be present in the organization and strategy of the acquiring company.

There are many examples of what can happen in a merger. Particularly in the software industry, it is not uncommon for employees to leave the target. The following points are therefore crucial for the success of an integration process:

  • How can I motivate relevant employees to stay?

  • Are there opportunities to document their know-how and make it available to the company in a sustainable manner?

  • Is the target company really in possession of all intellectual property rights?

Project risks in the context of a merger and the resulting integration already arise during the definition of the project scope, the assessment of the necessary resource expenditure as well as during the coordination of its implementation.

How can the risks of merger integration be classified?

The most comprehensive classification is based on the findings of the merger integration expert Dr. Johannes Gerds. My recommendation is that every company should use this as a basis for identifying risks and identify the problems specific to the company. These can be summarized in a risk catalogue and subsequently supplemented by further project-specific risks during the concrete due diligence. This provides an extremely solid basis for the entire risk management process.

What is the best way to identify risks?

In any case, a structured approach is advisable. As a rule, this is based on a company-specific risk catalogue, which is used in every due diligence. But first and foremost, the project and its integration should be examined from a neutral perspective. In the course of a risk workshop, the entire project-specific risks can then be identified and assessed together with all experts and managers involved.

It is always important to adopt and maintain a neutral position. This serves not only the critical questioning of hypotheses regarding the acquisition and integration, but also the concretization of the entire planning to be carried out. As a rule, this can be done by the finance department and the central units of the organization that are assigned to support acquisitions.

What are the most common risks?

My experience has shown that there are certain risks that can be observed again and again in an acquisition. These are primarily personnel attrition, serious differences in the corporate culture as well as an underestimation of the actual integration effort and the project management requirements in the case of more complex integrations.

Which risks can have the most adverse effects?

This question must always be considered in connection with the size of the buying company and the company to be bought. In the case of smaller acquired companies, the departure of a few key employees can have a major impact on the success of a merger. However, integration often suffers from a lack of experience on the part of the project members involved as well as insufficient resources on the part of the acquired company.

Large companies, on the other hand, often underestimate the complexity and effort required for integration. In addition, the cultural differences between the company buying and the company to be bought also involve a recurring risk potential.

Medium-sized companies tend to show a mix of the problems found in small and large companies. Although they often have better resources and more experience available than smaller companies, they still face the risks known from them. The acquiring company itself can also create considerable distortions through wrong decisions and negatively influence the success of an integration. Examples include surprising strategy changes or sudden changes in the receiving organization in the middle of the integration process.

Once risks have been identified, how should they be dealt with afterwards?

In my view, there are four very typical approaches to dealing with risks: ignoring, observing, actively initiating countermeasures, or divesting. Of course, the first approach is the easiest, but also the most dangerous, way. Therefore, it is not really recommended, even if the probability of these risks is minimal. Perhaps I should note at this point that we are not talking about probabilities in the statistical sense, but rather about estimates, i.e. assumptions about the probabilities of occurrence. Accordingly, even a risk with a low estimated probability of occurrence can occur at any time, precisely because one does not know its true probability.

Observation appears to be the most sensible step for risks that are unlikely or can hardly have any consequences for the success of the project. They are identified and regularly checked to see whether their probability of occurrence and thus their influence on the success of the project have changed. Accordingly, active countermeasures can be taken in good time in the event of an expected hazard potential.
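The four handling approaches described above can be sketched as a simple classification rule over estimated probability and impact. This is a minimal illustration in Python; the thresholds and example risks are my own illustrative assumptions, not fixed rules from any risk catalogue:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # assumed probability of occurrence, 0..1
    impact: float       # assumed impact on project success, 0..1

def handling_strategy(risk: Risk) -> str:
    """Map a risk to one of the four typical handling approaches.
    Thresholds are illustrative assumptions, not fixed rules."""
    if risk.probability < 0.1 and risk.impact < 0.1:
        return "ignore"       # easiest, but also the most dangerous option
    if risk.probability < 0.3 or risk.impact < 0.3:
        return "observe"      # re-check probability and impact regularly
    if risk.impact >= 0.9 and risk.probability >= 0.8:
        return "divest"       # walk away if a deal breaker materializes
    return "counteract"       # actively initiate countermeasures

risks = [
    Risk("key employee attrition", 0.7, 0.9),
    Risk("minor IT migration delay", 0.2, 0.2),
]
for r in risks:
    print(r.name, "->", handling_strategy(r))
```

In practice the strategy per risk would be revisited in every risk workshop, since the estimated probabilities change over the course of the project.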

But one can also act in advance and take countermeasures if the occurrence of risks is to be avoided for very pragmatic or political reasons. An example of this is the impending departure of relevant employees, which can be prevented at least temporarily by contractual regulations. In this way, time can be gained which is actively used to transfer their relevant knowledge about products or workflows in the company to be purchased to other persons or to document it if necessary.

 

Why online learning must be part of an end-to-end M&A process management tool

Let us have a look at the situation for mergers in most companies. How are people working, especially in merger integration, prepared for success?

Which people work in post merger integration?

An acquirer has a large project team for post-merger integration and so does the target. How do you make sure that all members of the integration team have sufficient knowledge to perform best? The answer is that all project members, not only the managers and project managers, need background knowledge on merger integration as well as lessons learned and best practices from other merger integration projects.

What information is needed?

You need to expand the experience horizon of all involved managers into the realm of merger integration specific topics and decisions. According to Kahneman, what-you-see-is-all-there-is might be a problem, which means that people can only cope with situations that are within their horizon. So you have to expand it with content about merger integration theory, but also about situations and the pragmatics of merger integration.

How to make training work

Many mergers are cross-border mergers with many people in many countries involved. So due to geographic diversity, timezones etc. onsite training does not make sense. Go online.

The solution

Therefore, a group of seasoned merger integration managers created an online training called PMI2go that provides that knowledge as well as experiences and lessons learned from over 250 successful merger integration projects. The solution is an on demand, online training with just the right mix of theory and hands-on situations explaining how to successfully integrate companies. Together with SAP, Bertelsmann, Qiagen and Stada we created an online training for merger integration that fits multiple different industries and is targeted to managers acting in a merger integration situation.

The training has content for managers and project members and covers in detail topics like HR integration, Finance integration, Production integration and Research and Development integration. Find more information here: http://mergerintegration.eu/mergerintegrationtraining.html

Modules of the online training


A recap of the European workshop on software ecosystems 2018

The workshop was held within two sessions of the second day of the First European Platform Economy Summit in Berlin. The first session was a workshop called “New Ecosystem Opportunities & 'White space' Opportunities in Software and High-Tech“ and the second session was a panel about “Network Effects, Data Effects & AI - Keys to the castle“ moderated by Slinger Jansen. You can find more details on both sessions below.
What made this workshop successful were the discussions about the presentations but also the interactions in breaks and during lunch. A big thank you goes out to all presenters, helpers and participants!

Session one: New Ecosystem Opportunities & 'White space' Opportunities in Software and High-Tech

This design-thinking based workshop featured three short motivating presentations by Peter Buxmann, Sebastien Dupre and Thomas Curran followed by topic-based, hands-on workshops.

Thomas captured the audience by describing his recent success with creating new cloud based ecosystems for digital business in the financial industry. In a traditionally closed industry, what do you do to turn a company into a digital, open platform? Thomas had done just that in a three year project and talked about how to do that successfully.

Peter reported about several studies on the value of data and the importance of privacy. He provided insights into challenges and success factors for software platform providers regarding the value of customer data, customer privacy and tradeoffs between data privacy and data farming by platform providers.

Sebastien showed how Uberization in field service management works by engaging a crowd of service technicians inside and outside of companies. He explained how companies can build an ecosystem connecting field service technicians, partners, own employees and customers to scale their field service operations, increase revenue and provide unmatched customer experience.

Then we split the crowd of thirty people into three teams that worked together and discussed with the help of the moderators and our design thinking coach Olaf Mackert. First, we ran an introduction game called two truths and one lie, which created a lot of laughter and made everybody ready to work together trustfully.

Then everybody dumped the ideas, questions and issues he or she wanted to discuss on post-its, which were clustered into topics by the moderator. The teams then voted on the topic to start with. The discussions went on in five-minute slots; after each slot, the team voted on either continuing the discussion on the topic or moving on to the next topic.

Thomas Curran´s team, which was the largest team, focused on the technical aspects of creating a platform and technology selection. They had lively and productive discussions leveraging the joint wisdom of the team.

Sebastien´s team of ten discussed topics around uberization of any industry and about changes in strategies for field service management.

Peter Buxmann´s team was a diverse group with members from venture capital, manufacturing and public administration, which made the discussions very interesting thanks to the different views. The team addressed questions around people´s motivations to share data, ways to create value from data, and the impact of data protection on data-driven business models.

The results of each team will be provided in a short writeup from the moderator.

Session TWO: Network Effects, Data Effects & AI - Keys to the castle

John Rethans, head of Digital Transformation Strategy from Apigee/Google, brought everybody on the same page regarding APIs - what they are and what it means to implement an API driven strategy and technology.

Slinger Jansen from Utrecht University opened the panel with a short presentation about his research. The panel´s focus was on pragmatic aspects of creating successful API platforms. It covered questions like “What is the role of APIs for platforms? How do you build API-based platforms?  What are the success factors and pitfalls when building API-based platforms? How to explain their power to non-technical executives and shareholders?”

In addition to Slinger and John, the panel featured the following speakers:

Nik Willetts - President & CEO, TM Forum

Andreas von Oettingen - MD of Factor10

M&A Digitalization: where should data reside?

In past years, there was always a dichotomy: either companies were only on premise, storing their crown jewel data on site, or companies ran certain applications in the cloud. Now, hybrid clouds are on the rise. This means there are three options.

In M&A, data rooms are typically private cloud based storage of highly confidential data during due diligence. Data from other phases are usually stored on site. With all these changes happening and the clear need to manage M&A processes, where should companies store their data about all phases of the M&A process?

On premise?

The safest way to store mission critical data is to store it on premise, locked up. This is perfect for the early phases. But as soon as more people get involved from inside and outside the company, during due diligence and post merger integration, this approach is no longer ideal.

In the cloud?

Cloud storage makes perfect sense for trustfully giving restricted access to people from different companies. For most companies, this is needed during due diligence and the following phases. But many companies also interact with third party companies even before due diligence.

Requirements for M&A process tools

Customers rule. An end-to-end process tool must respect that. No matter whether customers choose on site, private cloud or public cloud storage, vendors of end-to-end process tools should give customers a choice. The customer should decide where to store data.
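One way a tool vendor can honor that choice is to keep the storage backend pluggable, so the same process logic runs against on-premise, private cloud or public cloud storage. Here is a minimal sketch of that design idea in Python; the class names and URI schemes are purely illustrative assumptions:

```python
from abc import ABC, abstractmethod

class DealDataStore(ABC):
    """Storage backend for M&A process data; the customer picks the deployment."""
    @abstractmethod
    def save(self, phase: str, document: str) -> str: ...

class OnPremiseStore(DealDataStore):
    def save(self, phase, document):
        return f"on-premise://{phase}/{document}"

class PrivateCloudStore(DealDataStore):
    def save(self, phase, document):
        return f"private-cloud://{phase}/{document}"

class PublicCloudStore(DealDataStore):
    def save(self, phase, document):
        return f"public-cloud://{phase}/{document}"

def make_store(choice: str) -> DealDataStore:
    """The tool offers all three options; the customer decides."""
    return {"on_premise": OnPremiseStore,
            "private_cloud": PrivateCloudStore,
            "public_cloud": PublicCloudStore}[choice]()

store = make_store("private_cloud")
print(store.save("due_diligence", "contract_42.pdf"))
```

The point of the abstraction is that the rest of the M&A process tool never needs to know where the customer chose to keep the data.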

M&A thought leadership: Gatekeeping and resourcing in merger integration of software companies
This is a transcript of an interview with me for a master thesis about the integration of a smaller software company.

Question: You mentioned two capabilities, gatekeeping and resourcing. Tell me a little more about these.

KP: The first one is gatekeeping. You acquire a highly innovative software company into a larger software company and you want to keep it as innovative as possible. You want to integrate their offering with the rest of the portfolio. So the question comes up: Who should build the integration? Which of the many solutions of the larger software vendor should be integrated first? If you integrate all of them at once, the acquired company is no longer innovative.

To solve this, you have to apply gatekeeping: prioritize integrations and limit the amount of work to be spent by the acquired company on the integration to allow them to still be innovative.

Question: How did you use resourcing to find people building the integration?

KP: Since the target company had little knowledge about our programming environment, it became pretty clear that the resources had to come from us.

So, we basically came up with a team of, I don’t remember exactly, several dozen people from our organization that actually built the integrations. Again, you have to be very thoughtful of how much of the overall development capacity of the acquired company you want to spend on integration.

Find more thought leadership in my books listed below.

Digitalization of M&A: how the job to be done forces a new generation of tools

What is the job to be done? The job to be done is a concept invented by Clayton Christensen in his book "Competing Against Luck: The Story of Innovation and Customer Choice". It is a new way to look at the needs of customers and why they are "hiring" a product to fulfill their needs. The key concept is to focus on the customer and to avoid the viewpoint of the product. By doing so, you get a wider view of what the needs of the customers are, what the customer should hire to help him and who your real competitors are.

How does it influence tool design? As soon as you know the job to be done and the context of the customer, you are able to design a product or service that has maximum value for the customer. As mentioned in an earlier blog, the context of an M&A professional is his office, the work environment on his desk, his smartphone, desk phone and computer. An M&A process platform must respect and enhance this work environment, not add another tool. So let us use this approach to define two requirements for M&A process tools.

From tool to pain reliever: one pain I heard most from fellow M&A professionals is having to enter the same data, such as target valuation data, into several different PowerPoint presentations which have different formatting but basically should reflect the same data. So an M&A platform must store the financial data of a business case and generate it into different PowerPoint templates. An end-to-end M&A process platform should have a data management component for the financial data of the transaction that can intelligently export parts of the financial model into presentation formats.
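The idea of "one financial model, many presentation formats" can be illustrated with a tiny Python sketch: the numbers live in one place and are pushed into differently formatted slide templates. The figures and template texts are made-up examples; a real platform would render into actual PowerPoint files, e.g. via a library like python-pptx, rather than plain strings:

```python
from string import Template

# Single source of truth for the business case financials (illustrative numbers).
financials = {"target": "ExampleCo", "revenue_m": 120, "ebitda_m": 25, "price_m": 480}

# Two differently formatted slide templates that should show the same data.
board_slide = Template("$target | Revenue: EUR ${revenue_m}m | EBITDA: EUR ${ebitda_m}m")
banker_slide = Template("Acquisition of $target at EUR ${price_m}m (${revenue_m}m revenue)")

for template in (board_slide, banker_slide):
    print(template.substitute(financials))
```

Because every template reads from the same dictionary, updating the valuation once updates every presentation, which is exactly the pain relief described above.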

From tool to productivity boost: assistive technology helps you perform better. The pain of the M&A professional is that he has to research market and company data, bring them together and evaluate the opportunities. How can an end-to-end M&A platform help here? Market data feeds are provided automatically for business case creation; the platform offers research-as-a-service data feeds to accomplish that.

Summary

End-to-end platforms supporting M&A processes are the basis for the digital future of M&A processes. The job-to-be-done approach helps to define the services of these platforms. These value creating services, which are built on top of the platform, help M&A professionals get their job done. Stay tuned for more or meet me at the Platform Economy Summit in Berlin in November.

Digitalization of M&A processes: How to integrate best of breed solutions into one M&A process platform

We have to move forward quickly to disrupt existing M&A processes and leverage the best innovations to get to a digital M&A process. So here are my thoughts; some might be rough, but I want to get my requirements out now to ensure we are all facing the right direction for digital M&A.

Requirement: we need several vendors to provide innovations

Can the best innovation for all phases of M&A come from one vendor only? Probably not. So how do companies get the best functionality in a unified, end-to-end M&A platform? The platform has to be open and has to have OData based APIs to allow integration with best of breed functionality for the different phases of the M&A process.
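To make the OData idea concrete, here is a sketch of how a best-of-breed tool could query targets from such a platform using standard OData query options. The host, entity set and field names are assumptions for illustration, not a real service:

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint of an end-to-end M&A process platform.
BASE = "https://ma-platform.example.com/odata/Targets"

# Standard OData system query options: filter, projection, ordering, paging.
params = {
    "$filter": "industry eq 'Software' and revenue gt 50",
    "$select": "name,revenue,country",
    "$orderby": "revenue desc",
    "$top": "10",
}
url = BASE + "?" + urlencode(params)
print(url)
```

Because the query options are standardized, any vendor in the ecosystem can integrate against the platform without a bespoke API for each phase of the process.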

Requirement: We need a metamodel of end-to-end M&A processes and objects

Thirty years of object modelling for businesses are paving the way to create a metamodel of M&A processes. This metamodel should contain the objects and relationships used in the M&A process, like buyer, target and companies, which are contained in longlists and shortlists and have relationships with data rooms and documents like contracts, patents, financial data and so on.
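A first cut of such a metamodel can be sketched with a few object definitions. The classes and attributes below are my own illustrative selection of the objects named above, not a finished standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Company:
    name: str

@dataclass
class Document:
    title: str
    kind: str  # e.g. "contract", "patent", "financials"

@dataclass
class DataRoom:
    documents: List[Document] = field(default_factory=list)

@dataclass
class Deal:
    buyer: Company
    longlist: List[Company] = field(default_factory=list)
    shortlist: List[Company] = field(default_factory=list)
    data_room: DataRoom = field(default_factory=DataRoom)

deal = Deal(buyer=Company("AcquirerCorp"))
deal.longlist = [Company("Alpha"), Company("Beta"), Company("Gamma")]
deal.shortlist = [c for c in deal.longlist if c.name != "Gamma"]  # screening step
deal.data_room.documents.append(Document("License agreement", "contract"))
print(len(deal.shortlist))
```

A standardized version of this model, exposed via OData, is what would let different vendors' tools exchange longlists, shortlists and data room contents without custom mappings.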

Requirement: Standardization is needed

Establishing a standard metamodel for end-to-end M&A processes is key to success. There are three ways to establish it: via the market, via standardization committees, or by creating a winner-takes-all market for the end-to-end M&A process platform. It will be interesting to see which vendor chooses which approach.

Requirement: An ecosystem of extensions of the end-to-end M&A process platform

Based on the standardization and the OData-based metamodel, M&A process platform vendors can start to foster an ecosystem of innovations for the M&A process. Today, we would need e.g. the following ecosystem of vendors to engage: end-to-end M&A process platform, data room vendor, company information providers, contract analysis providers, machine learning application providers etc.

Summary

With the listed requirements in place, we can move forward quickly to leverage innovations from different vendors. From my point of view, establishing a winner in the end-to-end M&A process platform market is paramount to provide massive innovation to many companies. Several large corporates in Germany are considering adopting such an M&A process platform today to streamline their operations. I will keep you posted on whether one vendor wins the market or several vendors keep fighting for market share.

Like my way of thinking? Then feel free to read my book about M&A: M&A due diligence in the software industry. Also feel free to comment; I am happy to receive feedback.

Program of the European workshop on software ecosystems as part of the Platform Economy Summit

The European workshop on software ecosystems will be held as part of the Platform Economy Summit in Berlin. We will have two sessions on the second day of the summit.

November 21st

11:15am Challenges and success factors for creating digital platforms

14:30 Network Effects & APIs: Their role in driving platform value

The first session, moderated by me, is called “Challenges and success factors for creating digital platforms: Insights from studies, real life projects and Uberization“ and will feature three short motivating presentations by Peter Buxmann, Thomas Curran and Sebastien Dupre followed by topic-based workshops.

Peter Buxmann, Head of Software & Digital Business Group at Technical University of Darmstadt, will present the topic “Data Economy, Platforms, and Privacy: Insights from multiple empirical studies“. He will provide insights into challenges and success factors for software platform providers regarding the value of customer data, customer privacy and tradeoffs between data privacy and data farming by platform providers.

Thomas Curran will present the transformation of a financial industry heavyweight to becoming an open, digital platform. In a traditionally closed industry, what do you do to turn a company into a digital, open platform. Thomas has done just that in a three year project and will talk about how to do that successfully.

Sebastien Dupre from Coresystems (now SAP) will present the topic “Uberization of field service: a software platform for crowdsourcing service technicians“ and show how companies can build an ecosystem connecting field service technicians, partners, own employees and customers to scale their field service operations, increase revenue and provide unmatched customer experience.

The second session in the afternoon is called “Network Effects & APIs: Their role in driving platform value “ and will be moderated by Slinger Jansen - Software Ecosystems Research Lab, Utrecht University. It will focus on questions like “What is the role of APIs for platforms? How do you build API-based platforms?  What are the success factors and pitfalls when building API-based platforms? How to explain their power to non-technical executives and shareholders?”

The session will start with a short introduction about APIs in general by John Rethans from Google. Then Slinger will present the essence of the latest research on API approaches. After that, the panel will focus on pragmatic aspects of creating successful API platforms. After a short while, the panel will open up and take questions from the audience.

This session will feature the following speakers:

Slinger Jansen - Software Ecosystems Research Lab, Utrecht University

John Rethans - Head of Digital Transformation Strategy, Apigee, Google

Nik Willetts - President & CEO, TM Forum

Andreas von Oettingen - CTO Factor10

This session will start with short statements from the panel and will transition to a discussion with questions from the audience.

Hope to see you there. Please make use of the discounted tickets below.

Dr. Karl Popp

Join now and you get a special 15% discount off the booking fee. Just quote the discount VIP Code: FKN2652EWOSEL to claim your discount.
 
For more information or to register for the Platform Economy Summit Europe, please contact the KNect365 team on: Tel: +44 (0) 20 3377 3279 | Email: gf-registrations@knect365.com | Register here.
 
Remember to quote the VIP code: FKN2652EWOSEL to claim your 15% discount.

How machine learning can help in digitalization of M&A processes!

Machine learning is everywhere - except in M&A processes. Let´s change that. Let us imagine the impact of machine learning in different steps of a typical M&A process. Let us start by sharing some of my ideas to trigger your imagination. I am convinced that the technologies needed to achieve this vision are in place today, they are just not being used in this context.

Early phases of the M&A process, shortlisting phases

Let´s say you have five companies in your shortlist. Machine learning can help finding and selecting potential targets e.g. by predicting which of the companies considered will be the unicorn, i.e. the most successful company in the list. Approaches for doing that exist, e.g. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3159123
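As a thought experiment, here is what such a prediction could look like in miniature: a weighted scoring of shortlisted targets. This is a deliberately simple toy model with made-up features, weights and companies; a real approach, like the one in the paper linked above, would learn the weights from historic deal outcomes:

```python
# Toy ranking of shortlisted targets by a weighted feature score.
# Feature names and weights are illustrative assumptions; a real system
# would learn them from data on historic deals.
WEIGHTS = {"revenue_growth": 0.5, "founder_experience": 0.3, "funding_rounds": 0.2}

shortlist = {
    "Alpha": {"revenue_growth": 0.9, "founder_experience": 0.4, "funding_rounds": 0.6},
    "Beta":  {"revenue_growth": 0.5, "founder_experience": 0.9, "funding_rounds": 0.3},
}

def score(features: dict) -> float:
    """Weighted sum of normalized features, higher is more promising."""
    return sum(WEIGHTS[key] * value for key, value in features.items())

ranked = sorted(shortlist, key=lambda name: score(shortlist[name]), reverse=True)
print(ranked[0])  # the predicted "unicorn" under this toy model
```

Even this trivial sketch shows the workflow change: the shortlist arrives pre-ranked, and the M&A team spends its time validating the top candidates instead of scoring all of them manually.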

Preparing the Letter of Intent

Based on past projects, machine learning can help to predict deal breakers, find missing or potentially wrong data in the financial valuation of the target and propose deal structure and clauses for the letter of intent based on the existing, available data about the target and the acquirer.

Due diligence

A vital part of the job to be done in due diligence is that you are looking for missing data, deal breakers and risks in documents in the data room, but you only have limited time and a huge data lake in the data room. So let us see how automation and machine learning could help us here.

Day one of due diligence: the data room is available. Day two of due diligence: information about missing data, deal breakers and risks is already available.

How is that possible? Using automated document/contract analysis based on machine learning as well as data about deal breakers and historic projects, a machine learning application can provide this information. There is huge value in this: you get more time in due diligence to work on missing data, deal breakers and risks, so the quality of due diligence results will massively increase.
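In its simplest form, such a scan is pattern matching over data room documents. Here is a minimal Python sketch; the patterns and the sample clause are illustrative assumptions, while real tools like the ones mentioned later in this blog use machine learning models trained on many contracts instead of hand-written rules:

```python
import re

# Illustrative deal-breaker patterns; real tools learn these from historic deals.
PATTERNS = {
    "change_of_control": re.compile(r"change of control", re.I),
    "exclusivity": re.compile(r"exclusive(ly)?\s+licens", re.I),
}

def scan_document(text: str) -> list:
    """Return the deal-breaker categories found in one data-room document."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

doc = "Upon a Change of Control, the licensor may terminate this agreement."
print(scan_document(doc))
```

Run over every document on day one, even a rule-based scan like this produces a first risk list by day two; the machine learning variant additionally finds clauses that are phrased in ways no rule anticipated.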

No more reporting: During due diligence, digital assistants will automatically keep the lists of tasks, risks, issues and results, will create automatic reporting from that and propose next steps.

Merger integration

Results from the due diligence are automatically distributed digitally to all integration team members. Machine learning based digital assistants propose the integration plan, the integration timeline and which next steps should be taken. They analyze due diligence data and propose the set of data that should be doublechecked and validated. They validate that data by extracting information from the target´s ERP systems automatically and present deviations in digital dashboards and propose next steps.
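The validation step described above, comparing due diligence figures against data extracted from the target´s ERP system, boils down to a deviation check. A minimal sketch, with all figures and the tolerance being illustrative assumptions:

```python
# Compare figures from due diligence with values extracted from the target's
# ERP system and flag relative deviations above a tolerance (numbers made up).
dd_figures  = {"revenue": 120.0, "headcount": 250, "inventory": 14.0}
erp_extract = {"revenue": 118.5, "headcount": 242, "inventory": 19.0}

def deviations(dd, erp, tolerance=0.05):
    """Return the keys whose ERP value deviates from the due diligence
    figure by more than the tolerance, with the relative deviation."""
    flagged = {}
    for key, expected in dd.items():
        relative = abs(erp[key] - expected) / expected
        if relative > tolerance:
            flagged[key] = round(relative, 3)
    return flagged

print(deviations(dd_figures, erp_extract))
```

A digital assistant would surface exactly these flagged items in the dashboard and propose them as the data points to double-check first.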

Learning assistants analyze the learning needed by the involved integration managers based on their CVs and propose digital learning lessons based on PMI2GO.

No more reporting: during merger integration, digital assistants will automatically keep the lists of tasks, risks, issues and results, create automatic reporting from them and propose next steps.

Let us imagine the impossible - and make it work

The opportunities are massive but are not yet leveraged. I think the M&A community has to provide guidance to vendors to achieve a vision I call the Digital M&A Manifesto. Stay tuned for more details. Like this article to get more inspiration!

Digitalization of M&A: robots are boosting M&A process performance

While we are used to physical robots vacuuming our homes, software robots are not in widespread use yet. The term used for software robots is robotic process automation (RPA).

What is RPA? 

RPA is defined as tools to build automation for everyday tasks and processes on a computer screen using software robots. This can start with a simple sequence of clicks on the screen that you can replay automatically, but RPA can also cover more complex workflows with decision points. RPA tools usually contain a recorder that tracks certain work sequences on your computer screen and can replay these sequences later.
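The record-and-replay core of such a tool can be sketched in a few lines. In this toy version the recorded actions are just Python callables with descriptions; a real RPA product hooks into the operating system to capture and reproduce actual clicks and keystrokes:

```python
class ScreenRecorder:
    """Minimal sketch of an RPA recorder: capture a sequence of UI actions
    and replay them later. Actions are plain callables for illustration."""
    def __init__(self):
        self.actions = []

    def record(self, description, action):
        """Store one step of the work sequence."""
        self.actions.append((description, action))

    def replay(self, log):
        """Re-run the recorded sequence, logging each step."""
        for description, action in self.actions:
            log.append(description)
            action()

recorder = ScreenRecorder()
recorder.record("open valuation sheet", lambda: None)
recorder.record("copy revenue figure", lambda: None)

log = []
recorder.replay(log)
print(log)
```

Decision points, as mentioned above, would be added by recording conditional branches instead of a flat list of steps.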

What is RPA combined with machine learning? 

Recording workflows with current RPA tools is a manual process. If combined with machine learning, a digital assistant will track your online work and will propose automation of routine processes you do every day. This will lead to a step by step increase of the level of automation in processes.
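One simple way such an assistant could spot automation candidates is to look for action sequences that repeat in the user´s activity log. The sketch below counts repeated n-grams of actions; the action names and thresholds are illustrative assumptions, and a learning-based assistant would additionally tolerate small variations between repetitions:

```python
from collections import Counter

def propose_automations(action_log, n=3, min_repeats=2):
    """Find action sequences of length n that occur at least min_repeats
    times - candidates a digital assistant could offer to automate."""
    grams = Counter(tuple(action_log[i:i + n])
                    for i in range(len(action_log) - n + 1))
    return [list(gram) for gram, count in grams.items() if count >= min_repeats]

log = ["open_mail", "export_pdf", "upload_dataroom",
       "write_note",
       "open_mail", "export_pdf", "upload_dataroom"]
print(propose_automations(log))
```

Every sequence the assistant proposes and the user accepts becomes a recorded robot, which is the step-by-step increase in automation described above.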

How does RPA help in M&A processes?

It frees up time to focus on the really important topics instead of routine tasks and sequences of clicks on a computer screen.

Digitalization of M&A: See what is possible today in just one afternoon

Corporate M&A teams don´t have the time and bandwidth to research and follow up with a number of vendors and service providers to get an overview of the latest and greatest innovations for M&A processes.

To solve this issue within one afternoon, Xperience Connect organized an event at Frankfurt School of Finance last week providing several pitches of innovative products and services for next generation M&A processes.

Twenty-two corporates met to have a look at ten vendors. Fifteen-minute pitches by the vendors helped them get an overview within an afternoon, followed by a joint dinner to discuss.

Here are my four highlights of the afternoon:

Target screening

  • an interesting presentation from a researcher on how to reduce the number of potential targets based on acquisition goals, using an augmented set of company data. This is a startup in stealth mode, but they presented anyway…

Automatic contract analysis

  • RR Donnelley, vendor of a data room called Venue, showed their product eBrevia, a tool to automatically analyze contracts in many different languages based on machine learning.

  • eBrevia contains about 150 provisions it is able to find and analyze; customers can build and share new provisions with other customers if they like.

  • eBrevia can be used with Venue, but also with other data rooms.

Digital valuation

Smart M&A

  • Midaxo did a very interesting presentation of their innovative, cloud-based, end-to-end M&A process platform.

  • With this platform, all parties collaborate seamlessly following repeatable, systematic processes based on their specific, corporate playbooks.

  • Several large corporates, including Daimler and Philips, have adopted this solution.

Thank you, Stefan Gerhard Schneider, for organizing this event. He offered to have follow-up meetings with deep dives, which was well received by the corporates.

If you like this content, please also have a look at www.digitalmergers.com