Decision-makers today face unprecedented challenges in deploying capital and charting a path for growth in an ever-changing market. Substantial investment in AI has produced significant headway in certain business processes, but AI’s promise has fallen short when it comes to planning, strategic thinking, market analysis, and the other factors behind our most critical decisions.
A myopic focus on statistical learning, with its heavy reliance on historical data, has limited the vision of what AI can do to catalyze optimal investment in our most promising opportunities. In this article, I present a platform, developed over a number of years, that combines collective human intelligence and AI to empower collective creative prediction and action.
The limits of second-generation, statistical-learning AI are clearly addressed in Judea Pearl’s recent book, “The Book of Why”. He states:
“If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do.”
He goes on to explain:
“While probabilities encode our beliefs about a static world, causality tells us whether and how probabilities change when the world changes, be it by intervention or by act of imagination.”
In an MIT Technology Review interview, Yoshua Bengio, one of the fathers of deep learning, stated:
“I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.”
There are strong arguments for the integration of human and machine intelligence being fundamental to next generation AI systems. Humans remain superior in learning from extremely small data sets and in imaginative discovery and predictions. Trust and transparency are critical to AI applications in areas such as financial investments and corporate transformation decisions. Human-empowered AI and collective intelligence suggest ways to guide the course of AI’s future.
If you want input on a decision using the tool sets available today, you typically turn to survey tools, polls, or open-ended input via email and messaging tools. Today’s tools are geared to collecting individual perspectives on a topic or decision. For example, let’s say you have a group of people from whom you would like feedback on an investment decision, and you want input on just three of the questions posited above: business, team, and network effects. Each participant is asked to score the questions from 1 to 10 (10 being the highest) and to give the reasons or thinking behind their answers. The figure shows graphically a model of that information-collection exercise:
The blue dots represent the people inputting their views (23 participants). The pink dots represent their scores and reasons (113 reasons for scores submitted). Some elaborate their reasons more than others, but all provide scores for each question. This graph shows the results obtained with any of the methods mentioned above: survey, poll, or email. Calculating the mean scores for a group is easy, and survey tools do it automatically. Summarizing the 113 reasons, however, is hard unless you use the human brain and read each one. Even if a single human did read each one, they could not hold them all in working memory. At most, they might pick out five or six that resonate. Even then, unless they brought the group of evaluators together, they would never know the top three or four reasons that represent the group’s thinking about the investment decision. Learning prioritized relevance is a hard problem, and sentiment analysis is only mildly predictive. What is severely missing is the ability to ask:
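To make the gap concrete, here is a minimal sketch of the data such an exercise produces. The participants, questions, scores, and reasons are all hypothetical placeholders; the point is that the scores aggregate trivially while the reasons remain an unstructured pile of text:

```python
from statistics import mean

# Hypothetical responses: each participant scores a question (1-10)
# and attaches a free-text reason to the score.
responses = [
    {"participant": "p01", "question": "business", "score": 8,
     "reason": "Strong recurring revenue model."},
    {"participant": "p01", "question": "team", "score": 6,
     "reason": "Founders lack enterprise sales experience."},
    {"participant": "p02", "question": "business", "score": 7,
     "reason": "Good margins but a crowded market."},
    {"participant": "p02", "question": "network effects", "score": 9,
     "reason": "Value grows with each new customer integration."},
]

# Mean score per question -- the part survey tools automate.
by_question = {}
for r in responses:
    by_question.setdefault(r["question"], []).append(r["score"])

means = {q: mean(scores) for q, scores in by_question.items()}
print(means)

# The reasons, by contrast, are a flat list of free text with no
# built-in way to rank, cluster, or summarize them.
reasons = [r["reason"] for r in responses]
print(len(reasons), "reasons to read manually")
```

With 23 real participants, that second list runs to 113 entries, which is exactly the summarization problem described above.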
How is the group aligned around the reasoning for this investment decision?
Suppose we introduce a mechanism that allows participants to rank a sample of each other’s reasons. Using a relevance-learning algorithm, we can then filter out the reasons with lower relevance to the group, radically simplifying the analysis. Note that we are seeing the early stages of how this group reasons together. This is a first step in collective reasoning: filtering for relevance through an AI-augmented peer-review process:
Note that the person on the lower left has many reasons that aren’t particularly relevant to his or her peers. (Have you ever felt like that person in a meeting?) If we look only at high-relevance reasons (e.g., scores above 50%), we reduce the complexity of the problem considerably. We now need to read only ~50 reasons for a deeper dive into what the group thinks.
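A minimal sketch of that peer-review filter, under the simplifying assumption that relevance is just the fraction of positive peer rankings a reason receives (the actual platform presumably learns this with a richer algorithm):

```python
# Hypothetical reasons, each reviewed by a sample of peers.
# A vote of 1 means a reviewer ranked the reason as relevant.
reasons = {
    "r1": {"text": "Strong recurring revenue model.", "votes": [1, 1, 1, 0]},
    "r2": {"text": "I just have a good feeling.",     "votes": [0, 0, 1, 0]},
    "r3": {"text": "Crowded market, thin moat.",      "votes": [1, 1, 0, 1]},
}

THRESHOLD = 0.5  # keep reasons endorsed by more than half their reviewers

def relevance(votes):
    """Relevance score: fraction of positive peer rankings."""
    return sum(votes) / len(votes)

relevant = {k: v for k, v in reasons.items()
            if relevance(v["votes"]) > THRESHOLD}
print(sorted(relevant))  # -> ['r1', 'r3']
```

Applied to the 113 reasons above, a 50% threshold is what shrinks the reading load to roughly 50 entries.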
Using some of the latest NLP technologies, as we dynamically learn the relevance of reasons, we can also learn topics (shown in green) of critical importance to the decision. Topics like good market or poor go-to-market make it easier to summarize the group’s reasoning. Since topics, or themes, are just collections of reasons, each carries a theme relevance score, so the themes the group is thinking about can be put in priority order based on the group’s collective judgment. In this particular case, it turns out there are five key themes, each with positive, negative, and neutral sentiments attached, further simplifying the results. Because each reason is attached to a quantitative score, we can combine that score with sentiment analysis to get a much more predictive and precise read on the true intent of each person’s thinking about the decision.
Recall, however, that the initial purpose of this exercise was to act on the group’s recommendation regarding this investment decision. By using a structured process like the one depicted in Figure 1, we can link all the collective reasoning into a prediction. The themes (green) and their included relevant reasons (pink) ultimately influence the feature ratings, and thus the predictive score. In this case, the group gives the investment a score of 79%. Such scoring methods can be trained on ground-truth data, integrating human and collective intelligence, to provide a framework for a whole new approach to collective decision-making in organizations.
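One simple way such a predictive score could be composed is as a weighted mean of the feature ratings, mapped onto a percentage. The weights and feature means below are invented for illustration; in the system described, weights would be trained against ground-truth outcomes:

```python
# Hypothetical scorecard: the group's mean rating (1-10) per feature,
# and illustrative feature weights (in practice, learned from data).
feature_means = {"business": 8.0, "team": 7.5, "network effects": 8.2}
weights       = {"business": 0.4, "team": 0.35, "network effects": 0.25}

# Weighted mean of feature ratings, mapped from the 1-10 scale to 0-100.
score = sum(feature_means[f] * weights[f] for f in feature_means)
percent = round(score * 10)
print(f"{percent}%")  # -> 79%
```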
Each decision process produces a collective reasoning model of a decision: a collective cognitive model of what the group believes will be the outcome of a decision, such as a decision to invest or pass. The model allows inspection from different perspectives. For example, we can start with the predictive outcome and ask “Why”. What is the reason for the decision? The figure below shows a perspective on the business quality aspect of the decision.
The feature relating to quality of the business received a mean score of 8.
However, as you can see, there is diversity of opinion: some thought the business model was good, while others did not. A critically important point is that collective reasoning lets us explore this diversity. Collective reasoning is not about “herd mentality”; it explores the ramifications of diverse opinions, reasoned by diverse individuals, and how they blend into a collective prediction or decision.
Collective reasoning leverages findings from collective intelligence, specifically the diversity prediction theorem:
collective error = average individual error − prediction diversity
For the collective reasoning system described here, we calculate both a quantitative diversity score and a language diversity score. The latter is made possible by turning language into geometry using the latest embedding techniques: each reason becomes a point in language space with a calculable distance from every other reason. We have shown that:
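The diversity prediction theorem is an exact identity when errors are measured as squared errors, which a small numeric example verifies. The second half sketches the language-diversity idea as mean pairwise distance between reason embeddings; the two-dimensional vectors stand in for real embedding-model output:

```python
import math
from statistics import mean

# --- Diversity prediction theorem on a toy numeric prediction ---
# collective error = average individual error - prediction diversity
truth = 10.0
predictions = [8.0, 11.0, 13.0]

collective = mean(predictions)
collective_error = (collective - truth) ** 2
avg_individual_error = mean((p - truth) ** 2 for p in predictions)
diversity = mean((p - collective) ** 2 for p in predictions)

# The identity holds exactly (up to floating-point error).
assert abs(collective_error - (avg_individual_error - diversity)) < 1e-9

# --- Language diversity sketch ---
# Each reason is a point in embedding space (vectors are placeholders
# for real embedding-model output); diversity = mean pairwise distance.
embeddings = [(0.1, 0.9), (0.8, 0.2), (0.4, 0.5)]
pairs = [(i, j) for i in range(len(embeddings))
         for j in range(i + 1, len(embeddings))]
lang_diversity = mean(math.dist(embeddings[i], embeddings[j])
                      for i, j in pairs)
print(round(lang_diversity, 3))
```

The theorem makes the payoff of diversity explicit: holding individual accuracy fixed, a more diverse group produces a smaller collective error.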
the greater the cognitive diversity of the team the greater the predictive accuracy
By linking reasons (causes) to a score, we establish a causal relationship between the reasoning and the score. The net effect is that we now have a causal network for a prediction, or a causal cognitive model of a decision. Over time this can be generalized into a foundation for building computational models of the collective reasoning and intelligence of teams of individuals across broad decision domains.
In summary, the collective reasoning system described here has the potential to radically transform how organizations capture and improve the intelligence of their decision-makers.
Those with an interest in AI will recognize the result as a Bayesian belief network, or causal network. In effect, we have automated the process of acquiring knowledge from a group of experts or team members in an organization.
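Structurally, the acquired model can be pictured as a directed graph in which reasons feed themes, themes feed features, and features feed the predicted outcome. The sketch below uses a plain adjacency dictionary with invented node names; asking “why?” of the outcome amounts to walking the causal links backward:

```python
# Hypothetical decision model as a directed graph:
# reasons -> themes -> features -> predicted outcome.
edges = {
    "reason: strong recurring revenue": ["theme: good market"],
    "theme: good market": ["feature: business"],
    "feature: business": ["outcome: invest score"],
}

def ancestors(node, graph):
    """All nodes with a directed path to `node` (the 'why' of it)."""
    parents = [src for src, dsts in graph.items() if node in dsts]
    found = set(parents)
    for p in parents:
        found |= ancestors(p, graph)
    return found

# Inspecting the prediction from the outcome backward.
print(sorted(ancestors("outcome: invest score", edges)))
```

A full Bayesian belief network would additionally attach conditional probabilities to these edges; the graph structure alone already supports the “start with the outcome and ask why” inspection described above.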
In this short piece I have presented a path to faster, more accurate organizational decisions. Each collaborative process is driven by a decision “scorecard” that captures the factors you need to consider to make a decision. Each process results in a score and a complete record of the knowledge and thinking that went into the decision. The model can be archived and used as an independent resource for organizational learning. It simulates the thinking of the group and as such is a causal model, a knowledge model, a mini-intelligence of the expertise and thinking that produced the decision.
If you have a “scorecard” of the factors you need to consider to make your decision, consider using collective reasoning as described above to automate and streamline your decision-making process. It can be done asynchronously and remotely, with no meeting required. Once you are aligned, you can meet to reflect on the information generated by the process and make a final decision; it will be a much more pleasant experience, and you will have a permanent record of the complete reasoning process your team went through. Remember, this is a decision-support process: other factors, such as an influx or reduction in available funds, a sudden shift in the macro economy, or even a pandemic, may be germane to a decision but not have been included in the analysis.
Finally, and perhaps most important in the long run, the process I have described produces a permanent record of the reasoning of both individuals and the organization as a whole. That record can be evaluated under the bright light of reality as it unfolds. Did the organization consider the proper reasons? Did it weight them properly? Linking process to outcomes facilitates learning: analysts can evaluate past reasoning in light of how the future actually unfolded.