The problem of inductive business hypothesis generation

Bullshit

It may surprise some, but my interest in the cognitive sciences was sparked by the bullshitters I frequently encountered in seminars, meetings and other events related to my business sector (entrepreneurship, creativity and computer science). I was bombarded with countless theories and concepts, organisational ones above all, each more ingenious than the last and each seemingly able to solve anything. All these theories were of course backed up by so-called “scientific” articles. Despite this evidence, seen as irrefutable by its promoters, I was wary of the “silver bullet” that was regularly marketed to me, sometimes advertised simply as such. In fact, some of these poorly argued theories sparked my interest more than others. On the one hand, some of the proposed arguments annoyed me by being inappropriate; on the other, I felt that some theories were more credible than others, but poorly presented or interpreted. Of course, at the time, I was not aware that my own intuition-based reasoning was no more appropriate, but I still felt that there was something to be done in this area, without really knowing what. A common trait in all these “stories” was that the “evangelists” (a regularly used term) of these methods seldom seemed, in my opinion, to really master their subject. All that seemed important to them was to sell their ideas at any cost, including by bulling, i.e. not actually lying, but reciting points in keeping with the theory, the better to convince their audience (Perry, 1963). Using reference philosophical texts on the epistemology of the cognitive sciences, I will try to make the connection with the methods used in the Lean Startup, still very popular today, and explain the risks of such an approach.

As early as 1904, the geologist Thomas C. Chamberlin addressed the topic of methodology in scientific research in his article “The Methods of the Earth-sciences” (Chamberlin, 1904). According to him, the best method to verify a hypothesis is the one that is most precise, but also exhaustive and unbiased. To avoid this last risk, the researcher (or entrepreneur looking for a business model) must make his or her observations without any preconceptions. Taken at face value, this approach is problematic, as it prevents the researcher from narrowing an observation down enough to obtain relevant data. As Karl Popper (1969) demonstrated to his students, the simple instruction “observe” naturally leads to the question: “Observe what?” The choice of our subject of interest therefore arises from a starting hypothesis, which itself arose from one or more observations. Moreover, much of our knowledge is built using the hypothetico-deductive model, as described by Bacon (1268). This procedure consists of first formulating a hypothesis, deducing its consequences and then testing them through experiments. In opposition to “The Method of Colourless Observation” just described, Chamberlin (1904) proposed two other methods, called “The Method of the Ruling Theory” and “The Method of the Working Hypothesis”. The first consists of carrying out as many experiments as needed to verify an initial hypothesis. The second consists of treating a hypothesis as tentative until observations contradict it. This is the method recommended in startup support programs (including those in which I actively participate).

According to Chamberlin, this latter method can easily degenerate into the first one if the experimenter is not vigilant. In both cases, a theory can be compared to the researcher’s child. It is easy to imagine how some people persist: like a magistrate who, prisoner of his own beliefs and convictions, examines incriminating but not always exculpatory evidence, they retain only the information that confirms their hypothesis and neglect everything else. For elaborating a theory from a series of empirical observations confirming the underlying hypothesis, however numerous they may be, raises a whole series of difficulties, including the problem of induction (Popper, 1969). Induction is the mental operation by which we go from given observations to a proposal that explains them. The method is therefore probabilistic, since it rests on the principle that the greater the number of observations, the greater the likelihood that the general theory induced from them is true.

However, according to Karl Popper (1969), no general theory can be justified this way: most of the time it is impossible to observe all instances of a type of observable fact, and induction can only be justified by induction, which leads to an infinite regress. Popper did not hesitate to characterize the generation of hypotheses by induction as a myth in the development of objective scientific knowledge. His first argument concerns the selective nature of observation. To observe, attention must be focused on a point of interest, and this point of interest originates from a hypothesis that was itself constructed from one or more observations, which makes observation a recursive process. This tendency to look for regularities in the world around us is a very powerful learning mechanism, on which conditional reasoning is based. It seems natural, but it can pose a whole new series of problems…

Confirmation Bias

This method involves multiple cognitive biases, the best known of which is the “confirmation bias”, whose powerful effects were demonstrated by Wason & Johnson-Laird (1972). Wason’s original experiment (1960) asked participants to discover a rule from the starting sequence of numbers “2-4-6”. Participants could propose sequences of numbers to test their hypotheses, and were then told whether each sequence was compatible with the rule to be discovered. After making several proposals, they were asked to state a general rule of which they were absolutely certain, and received positive or negative feedback. The task engages our ability to generate hypotheses, but also our ability to give them up; according to the author, these two processes interact. The experiment was designed to highlight the method most commonly used by participants to find the rule. It also shows that most participants are prepared to formulate a rule based solely on proposals that correspond to their initial hypothesis. Most subjects proposed the sequence “4-6-8” (valid), then “6-8-10” (also valid), and prematurely announced the rule “add two to each successive number”. In fact, the rule was “numbers in ascending order”, and barely 21% of the participants found it without first announcing an erroneous solution. More surprisingly, more than half of the proposals made by those with an erroneous solution remained consistent with it. Most people, even very intelligent ones, seem incapable of considering that there may be a different or more basic rule than the one that seems to work (Miller, 1967). Where most participants used a strategy of verifying their hypothesis, the successful ones tended to try to invalidate it, with proposals such as “2-4-10” (valid), “10-6-4” (invalid) or “1-7-13” (valid).
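The logic of the 2-4-6 task can be sketched in a few lines of code (a toy illustration of my own, not part of Wason's study): confirmatory proposals satisfy both the subject's hypothesis and the hidden rule, so positive feedback on them can never tell the two apart, whereas a proposal the hypothesis forbids can refute it.

```python
def true_rule(a, b, c):
    # The hidden rule of the experimenter: strictly ascending numbers
    return a < b < c

def subject_hypothesis(a, b, c):
    # The subject's premature hypothesis: "add two each time"
    return b - a == 2 and c - b == 2

# Confirmatory proposals: valid under BOTH rules, so the positive
# feedback cannot distinguish the hypothesis from the hidden rule.
for triple in [(4, 6, 8), (6, 8, 10)]:
    assert subject_hypothesis(*triple) and true_rule(*triple)

# A falsifying test: the hypothesis predicts "invalid", but the
# feedback is "valid" -- the hypothesis is thereby refuted.
assert not subject_hypothesis(2, 4, 10) and true_rule(2, 4, 10)
```

Only proposals of the second kind can eliminate a wrong hypothesis; piling up proposals of the first kind merely feels convincing.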
As Popper (1969) stated, there is no “valid” inductive method, since it is impossible to guarantee that a generalization from observations confirming a hypothesis is correct.

A second major problem appears with “always correct” theories, which nothing seems able to dispute. Such theories are often referred to as pseudoscience because of this characteristic, and one of the examples most commonly used to illustrate it in scientific psychology is psychoanalysis. Thanks to concepts such as ambivalence, resistance, repression or denial, it is always possible to interpret a behaviour with hindsight: a set of behaviours can point in one direction or in the opposite direction without ever calling the theory into question. These peculiarities make psychoanalytic theories difficult to test experimentally, giving them longevity (notably in French-speaking countries) and a slow, gradual decline, unlike scientific theories, which disappear abruptly when new data proves them false (Meehl, 1978). For Popper (1969), a theory is scientific only if it is falsifiable. Falsifiability is the demarcation criterion between a scientific theory and one that is not. At the end of the 1920s, Popper formalised his conclusions as follows (slightly adapted by myself):

  1. It is easy to obtain confirmations, or verifications, for nearly every theory, if confirmations are what we look for.
  2. Confirmations should count only if they are the result of risky predictions; that is to say, if we should have expected an event which was incompatible with the theory and that would have refuted it.
  3. Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
  4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think), but a vice.
  5. Every genuine test of a theory is an attempt to falsify or refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
  6. Confirming evidence should not count except when it is the result of a genuine test of the theory; that is, when it can be presented as a serious but unsuccessful attempt to falsify the theory. (I speak in such cases of “corroborating evidence”.)
  7. Some genuinely testable theories, when found to be false, are still upheld by their admirers, for example by introducing ad hoc some auxiliary hypothesis, or by re-interpreting the theory in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the cost of destroying, or at least lowering, its scientific status. (I later described such a rescue operation as a “conventionalist twist”.)

Consequently, to increase the quality of a theory, it is necessary not to increase the number of empirical observations that confirm it, but to actively seek events that would make it impossible, such as the alibi of an alleged killer proving he could not physically have been at the scene of the crime when it occurred. Researchers (including those researching a business model) must therefore formulate their theory by making clear how it can be refuted, not the means by which it can be confirmed. Chamberlin (1904) proposed what he called “The Method of Multiple Working Hypotheses”, which implies elaborating, even before experimentation has begun, multiple hypotheses, some of which contradict each other. The researcher must keep an open mind and remain open to all possible interpretations of the studied phenomenon, including the possibility that none of the explanations is correct, and that new explanations may yet appear.

This last requirement, more than desirable, also suits the social sciences (including entrepreneurship), where behaviour often originates from multiple causes that can interact with each other, which makes replication and the use of statistical tests complicated. This is the major problem in cognitive science. Meehl (1978) does not hesitate to declare that the field of research in psychology (clinical, social, etc.) based on correlations and significance tests is so problematic as to be scientifically irrelevant. In his original article, Meehl detailed the twenty reasons that led him to declare Popper’s method (1969) incompatible with statistical procedures such as null hypothesis testing and analysis of variance. He declared, no more, no less, that “the almost universal reliance on merely refuting the null hypothesis is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology”. To illustrate his point, Meehl states that the variability of psychological processes is so dependent on context that it is impossible to recognise an interesting and relevant meaning outside the experimental environment designed by the researcher. Drawing up a list of these confounding variables is as impossible as quantifying their precise effects on a given dependent variable, not to mention that the samples used are often far too small. He goes even further by questioning meta-analyses that link studies of the same psychological construct in order to draw a general conclusion on its validity. For Meehl, researchers in psychology who wish to explain a theoretical construct should prefer consistency tests to statistical significance tests, i.e. use different, non-redundant means of estimating a quantitative value for that construct.
He concludes by declaring that the very nature of certain fields of study in psychology, such as social or differential psychology, is incompatible with the proposed method, and that those fields are probably doomed never to produce any major theory, scientifically speaking.

Considering everything mentioned previously, it should be noted that in the social sciences (and therefore in entrepreneurship), many questions can be satisfied with dichotomous answers such as “Is this one better than the other?”. So the question is not “What is scientific or not?”, nor “What is the best method?”, but “What are the most appropriate tools to achieve this or that objective?” The entrepreneur’s job is to use the tools appropriate to their research object and, above all, to master them well enough to avoid any erroneous interpretation of the results of their experiments, which could, in this precise case, lead to ruin with serious financial consequences. If the significance test has been heavily abused in some areas of psychological science, to the detriment of effect size measures for example, it is probably due to a funding model that drives the most brilliant theorists to publish or perish. Entrepreneurs, by contrast, very often stake their own money, or their time, with no certainty of remuneration. Not taking these issues seriously amounts to putting all your eggs in one basket.
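The distinction between statistical significance and effect size can be made concrete with a small sketch (synthetic numbers of my own, not data from any study cited here): with a large enough sample, a trivially small effect produces a huge t statistic, while its standardized effect size remains small.

```python
import math
import statistics

def t_statistic(sample, pop_mean):
    # One-sample t: grows with the square root of the sample size
    n = len(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)
    return (statistics.mean(sample) - pop_mean) / sem

def cohens_d(sample, pop_mean):
    # Effect size: how big the difference is, independent of n
    return (statistics.mean(sample) - pop_mean) / statistics.stdev(sample)

# A tiny effect measured on a huge synthetic sample:
scores = [99.2, 101.2] * 5000   # mean 100.2, sd ~1.0, n = 10000
print(round(t_statistic(scores, 100.0), 1))  # hugely "significant" t
print(round(cohens_d(scores, 100.0), 2))     # yet a small effect, d ~ 0.2
```

A reviewer (or an entrepreneur) looking only at the p-value would be impressed; looking at the effect size, much less so.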

In conclusion, the entrepreneur (or professional researcher) must show intellectual honesty and be as rigorous as their role demands, even if, in order to do so, they have to oppose the standards established by the markets (or institutions) in which they evolve. In his famous courses on philosophical psychology, Paul E. Meehl (1989) said: “If you get to a point where you say to yourself ‘I will cling to this blasted theory, whatever happens, come hell or high water’ you are no longer respecting the rules of scientific research”. Failing that, the entrepreneur enters the vicious circle of the optimistic entrepreneur, where hope is fed by information filtered through confirmation bias. And that is exactly what you need to avoid at all costs.

References

Bacon, R. (1268). On Experimental Science.

Chamberlin, T. C. (1904). The methods of the earth-sciences. Popular Science Monthly, 66, 66-75.

Eccles, J. G. (1992). Under the Spell of the Synapse. In F. Worden, J. Swazey, & G. Adelman, The Neurosciences: Paths of Discovery, I (pp. 159-179). Boston: Birkhäuser Boston.

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806-834.

Meehl, P. E. (1989). Paul E. Meehl Philosophical Psychology Videos. Retrieved from Department of Psychology of the University of Minnesota: http://www.psych.umn.edu/meehlvideos.php

Miller, G. A. (1967). The Psychology of Communication. New York: Basic Books.

Perry, W. G. (1963). Examsmanship and the Liberal Arts. In Examining in Harvard College: A collection of essays by members of the Harvard faculty. Cambridge, MA: Harvard University.

Popper, K. (1969). Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129-140. doi:10.1080/17470216008416717

Wason, P., & Johnson-Laird, P. (1972). Psychology of Reasoning: Structure and Content. Cambridge: Harvard University Press.


“Scrum in 10 slides” presentation has been updated

When I created the first version in 2012, I never suspected it would be such a success. After a few years of practice in agile coaching, and the deeper understanding of the Scrum framework that came with it, I decided to update the slides with the latest changes from the official guide. I also decided to focus on some of the things that are poorly understood in the many Scrum implementations I have had occasion to observe.

First, Scrum is not always suited to the environment that wants to implement it; prior work on mindsets must be done. I specified that the Scrum Master is responsible for leading the entire organization in that direction. The role of the Scrum Master is also, in my experience, the most misunderstood element of the framework. Its slide has been updated to help you explain what exactly a Scrum Master does on a daily basis.

Then, I also noted that important steps in the Scrum process were deliberately skipped, often for lack of understanding of their usefulness. I added the notion of visibility and transparency of the backlog to the Product Owner slide, as well as the fact that the Definition of Done can be the result of a reflection at the level of the entire organization. The Sprint Planning slide was already very clear about the usefulness of the Sprint Goal, but its importance is now emphasized more on the Daily Scrum slide, since the Daily Scrum is undoubtedly one of the most important Scrum events yet is usually badly conducted by teams.

Finally, I made some minor corrections, for example in the elements related to the Development Team, in the practice of Backlog Refinement (which should not exceed 10% of the team’s time during a sprint) and in the definition of the Product Owner.

This should now allow you to address the most important elements of Scrum fairly quickly. If you have comments about the content or if you notice an error on one slide or the other, do not hesitate to let me know.

Download the latest version here.

The misunderstanding between computer scientists and neuroscientists

I’ve been very busy lately: first finishing my studies in experimental psychology, then with all the projects I had in progress. I am still thinking about the direction this blog should take. Since I have undertaken a doctorate in cognitive neuroscience with an artificial intelligence component, I would like to share with you a reflection I have been mulling over for a few years. I am talking here about research on so-called “general” artificial intelligence, not statistics-based machine learning; that is to say, what is commonly understood as the attempt to copy human intelligence by creating a single intelligent agent capable of learning and performing all human tasks, as in the excellent TV series “Westworld”. My observation boils down to two points:

  1. Computer scientists underestimate the complexity of the human brain. 
  2. Neuroscientists overestimate the capacity of computing.

Although there are some extraordinary people in the field of artificial intelligence research, the domain is currently dominated by the first category. This has the effect of encouraging failed projects, some financed to the tune of billions of dollars of public money.

My approach was to try to become both, in order to play a coordinating and mediating role in projects bringing the two profiles together. After 5 years of intensive study at the University of Liège, where you can find some of the finest scientists in fields like short-term memory, I am now aware that we know almost nothing about the functioning of the human brain; that we are just scratching the surface via indirect measurement methods; and that psychology is so fragmented into different fields that it is difficult for a specialist to grasp the nuances of each of its components. This creates sterile wars between schools of thought, for example.

On the other hand, all computer scientists are aware of the limitations of computing, often rooted in basic physics. The uninitiated have a vision of the computer biased by films and by the sensationalism of journalists, which industrialists shamelessly exploit.

I will have the opportunity to come back to these points soon and support my analysis a little more. I just wanted to keep a written record somewhere to refer to from time to time. Meanwhile, stay tuned 🙂

Are all software developers introverts?

I realize that it has been nearly a year since I posted on this blog. I was short of time rather than short of ideas. To make amends, I have come back with something slightly more ambitious than previous publications, the first of many I hope. In this article I will explain the results of a small study that attempts to verify a stereotype often mentioned to me during my career in IT: that software developers are introverts. This vision of the programmer runs counter to another persistent stereotype you might know: that software developers are attracted to novelty. Indeed, it has been shown that novelty seeking is positively correlated with extraversion and emotional stability (De Fruyt, Van De Wiele, & Van Heeringen, 2000). This was confirmed to me by Michel Hansenne, professor of differential psychology at the University of Liège (2014). If these two stereotypes are contradictory, which one is true?

The first clue I found concerns novelty seeking: headhunters specializing in the recruitment of software developers have not waited for the results of scientific studies to use this argument to entice new recruits. They talk about new technologies before even addressing the question of money. This aspect of the average developer’s personality seems quite plausible, because I have noticed many times that some of them adopt what I call “CV Driven Development”. This is a counterproductive practice (for the company) which consists of almost always favouring new technologies, not for objective technical reasons but in order to add this new knowledge to their resume, or simply for the pleasure of experiencing something new.

I then asked whether anyone else had looked into the question. Although I found no academic study on the issue, I can still cite the Evans Data Corporation report “Developer Marketing Survey 2014”, whose results were presented and discussed in the popular online specialist magazine InfoQ (Avram, 2013). The report from this company, which specializes in analysing precisely the population targeted by this study, attempts to answer the question through a questionnaire sent annually to its panel of over 75,000 developers in 85 countries. Its results show no sign of introversion and confirm novelty seeking as a very common characteristic in the target population. My study attempted to determine whether similar results can be obtained using standardized personality inventories rather than self-evaluations in simple questionnaires.

To test my hypotheses, I used two personality inventories and compared the results to the population mean using Student’s t-test. The first is Cloninger’s Temperament and Character Inventory-Revised (TCI-R). The second is the Revised NEO Personality Inventory (NEO PI-R) of Costa & McCrae.
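For readers unfamiliar with the procedure, a one-sample comparison of this kind can be sketched as follows (the scores and the normative mean below are made-up placeholders for illustration, not my actual TCI-R or NEO PI-R data):

```python
import math
import statistics

def one_sample_t(sample, pop_mean):
    """t statistic for H0: the sample is drawn from a population
    with mean `pop_mean` (one-sample Student's t-test)."""
    n = len(sample)
    sd = statistics.stdev(sample)   # sample SD (n - 1 denominator)
    return (statistics.mean(sample) - pop_mean) / (sd / math.sqrt(n))

# Hypothetical trait scores for 10 developers vs a normative mean of 105
scores = [112, 118, 109, 121, 115, 119, 111, 117, 120, 114]
t = one_sample_t(scores, pop_mean=105.0)
# With df = n - 1 = 9, a |t| above ~2.26 (two-tailed, alpha = .05)
# leads to rejecting H0.
print(round(t, 2))
```

The p-values reported below come from exactly this kind of comparison between my sample and the published norms of each inventory.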

The sample consisted of a total of 50 subjects, all male, some of whom responded to both questionnaires (32) and some to only one (TCI-R: 38; NEO PI-R: 44). Note that this is a convenience sample, since all the subjects were final-year computer science students who participated in a specialized coaching program of my design, access to which was conditional on performance in an entry test.

The results shed light on our question, but they also give some interesting insights on other matters. First, with the TCI-R, there is a strong difference in the temperament dimension “Novelty Seeking” (p < 0.001), which supports the hypothesis that developers are eager for novelty, something not very compatible with introversion. It is also interesting to note that the character dimension “Self-Transcendence”, which is associated with spirituality, also differs significantly from the population mean (p = 0.037), despite the fact that the sample was culturally eclectic.

Developers TCI-R Results

The results from the NEO PI-R show one statistically significant difference, in the factor “Neuroticism” (p = 0.015), which reflects emotional (in)stability. This could be explained by the fact that most of the sample consisted of placed (i.e. successful) students. It is also interesting to note that extraversion is slightly above average, which also supports the hypothesis.

Developers NEO-PI R Results

My results show that, for my sample, the dominant personality type is far from introversion. In fact, we can demonstrate a clear difference from the general population in the dimension of novelty seeking, the opposite of introverted behaviour.

It is possible that this stereotype originates in a misunderstanding of what introversion is. The concept could be confused with another facet of personality, namely sociability. But even here, the results of my survey do not support the idea of introversion (Agreeableness in the NEO PI-R or Reward Dependence in the TCI-R). I think the genesis of the stereotype lies in the very nature of computing, which, by its apparent complexity, may have created a divide between IT people and others, in specific contexts and perhaps even at different times. Testing this idea would require a larger study.

Among the limitations of this study, one might suspect that the difference is statistically significant only because of a sampling bias. Indeed, the subjects were all involved in an internship supervised by a center for innovation and creative projects. One could easily imagine that these subjects applied to do their internship with us because they were already very interested in innovation and new technologies.

In conclusion, I would say that this study and the various initiatives in this area show that studying this particular population could prove a very interesting opportunity. The financing of an international study could also be possible: most of the customers mentioned on the Evans Data Corporation website are multinational technology companies like Microsoft, IBM, HP, Google or Adobe. It seems to me particularly appropriate to find collaborators with one foot in the industry of these companies and the other in an academic institution doing research in psychology.

Want to participate in one of my future studies? Please fill in this form and I will contact you: http://bit.ly/psydevform

References

  • Avram, A. (2013, February 20). Are Developers Introverted or Extroverted? Are They Intuitive or Logical? Retrieved from InfoQ: http://www.infoq.com/news/2013/02/Introverted-Intuitive-Logical
  • De Fruyt, F., Van De Wiele, L., & Van Heeringen, C. (2000). Cloninger’s Psychobiological Model of Temperament and Character and the Five-Factor Model of Personality. Personality and Individual Differences, 441-452.
  • Hansenne, M. (2014). Interview & email exchange. (P. Mengal, Interviewer)

Hope, confirmation bias and entrepreneurs

Entrepreneurs are often trapped in a vicious circle of hope. Hope clouds judgement and can prevent the entrepreneur from seeing things clearly and taking the right decisions. Hope is so seductive that it is what most personal development or “get rich in x lessons” books sell. Hope is also so powerful and rewarding that it is fed by the proofs generated by confirmation bias, which is itself fueled by hope. Entrepreneurs must learn the underlying concepts and learn to get objective opinions from others.

Confirmation bias refers to a type of selective thinking whereby one tends to notice and to look for what confirms one’s beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one’s beliefs. [Skeptic’s Dictionary]

Before talking about confirmation bias, let’s understand what hope is from a psychological point of view. Hope is one of the many mental defense mechanisms we have; it is triggered to disconnect us from hurtful emotions (anxiety, sadness, despair, …). It uses thoughts to construct a positive scenario of the future. People often grab these thoughts like a life buoy to avoid the reality of the present moment. We usually hope for a better future when we are uncomfortable with the present. Hope is what drives many wantrepreneurs. Hope will make the writers of entrepreneurship books rich, not you. Happy people hope for the best, once, then stop thinking about it.

To fuel hope, you need proof that what you see in the future is possible and likely to happen. This is where confirmation bias comes into action. Every single piece of evidence that makes your predictions credible is highlighted, while every single piece of evidence against them is denied. Those proofs stimulate hope, which in turn strengthens the confirmation bias. It’s a vicious circle. One great example of the relationship between hope and confirmation bias can be seen in believers in the 2012 end-of-the-world event. They can point you to dozens of “scientific” studies that prove it will happen, ignoring the hundreds of proofs that it won’t. One seductive thought in the 2012 case is that the event could potentially make them better than they are now. Hope is triggered by a desire for change and fueled by confirmation bias. A desire for change is not the only trigger, though: any unwanted emotion, such as fear, can also be a very good motivator. Some religions claim that if you are not a good practitioner, you won’t go to paradise but will burn in hell forever. With entrepreneurs, the desire for change is the key to the process.

Hope & Confirmation Bias

Conscious thinking is not the only factor: things get worse when we take biological psychology into consideration. The desire for change is not the only motivator for hope; there is also a physiological reward for the behavior. Dr. Robert Sapolsky, professor of biology and neurology at Stanford University, conducted experiments showing a direct link between hope and dopamine (a reward-related neurotransmitter) release in the brain. Studies even show that dopamine release is higher when outcomes are uncertain. Uncertainty? That is certainly something entrepreneurs can relate to.

The ability to cope with temporary difficulties is one of the entrepreneur’s required skills. Too many people can’t get through what Seth Godin calls The Dip. Defeatism, the opposite of hope, is one of the obvious failure factors in entrepreneurship. Defeated entrepreneurs often decide to stop; they think their efforts are no longer worth it. Defeatism works just like hope: your judgment is biased and you see things from the negative side. It is fueled by confirmation bias in the same way.

The Dip Illustration

Optimism is often suggested as a strategy to fight defeatism. It is true that if you tend to be defeatist, hope can counterbalance the feeling by replacing negative thoughts with positive ones. That’s positive psychology, and it’s even suggested by the Emotional Intelligence guru Daniel Goleman.

Having hope means that one will not give in to overwhelming anxiety, a defeatist attitude, or depression in the face of difficult challenges or setbacks. [Daniel Goleman]

Just as defeatism must be avoided, hope can't be a strategy in the long term. It can even hurt as much as defeatism. Do you remember how you felt after you really hoped that something would happen and it did not? You would be really surprised if you noted down each prediction you make and compared it with what actually happened. Hope will play against you: it will hurt, it will hide a more concrete problem, and more importantly, it will bias your judgment.

Self-awareness, again, is the key. To manage the process, treat hope (or defeatism) as a signal you must decode by being aware of the physical and psychological mechanisms at work. If we suspect we are biased, we must not entirely believe what we think, and we must assess every hypothesis we make against realistic information. Market studies and/or customer development are ways to test our assumptions, as is getting mentorship from more experienced people who have learnt the hard way. With some practice, you will acquire the self-awareness required to avoid being trapped in those loops. Be a mindful hacker.


If you know neither the competition nor yourself, you will fail

There are two common types of behavior I have noticed in people regarding their competition: the competition-driven people, and the competition-denial people.

The former constantly monitor their competitors' activity by visiting their websites and forums or constantly googling them, while the latter meticulously avoid any piece of text mentioning their names. Sometimes the latter behave like the former by accident, triggered by some piece of information they have read by mistake. Both emotionally driven behaviors are very dangerous for their business.

A third widespread behavior, linked to the same concept, is not entering a market because of the presence of one or more competitors, or even worse, entering a market without studying the competition in detail. All these types of behavior weaken your business decisions. The solution is to take your competition for what it is, no more, no less, and to intelligently decode your emotions and thoughts (the two are linked).

When someone is competition driven, he will often interpret any of the competition's initiatives as something he lacks (and therefore must have), instead of taking it for what it is: an initiative that can be equally good or bad. The resulting action is to copy the competitor's ideas (perhaps doing them better). The problem with that behavior is that the competition will always be significantly ahead, and the market will inevitably perceive him as the follower, not the leader, constantly competing on basics such as features and pricing. It is not only his innovation that suffers, but his overall ability as an entrepreneur to take good decisions. Being so abnormally obsessed with a competitor can be the start of very serious trouble, possibly leading to burnout or, worse, to abandoning the venture. His fear of competition is irrational, and many of his important decisions will be biased by his distorted perception of his challengers. It is not how he should build a healthy company.

Competition denial, on the other hand, is the act of ignoring the competition completely. This is a very commonly suggested remedy for competition-driven behavior: by ignoring anything the competition does, his judgment, innovation and decisions are not affected by what they are doing. After all, he may say, listening to the market and customers is the only valuable thing to do. Instead of being competition driven, he becomes customer driven. It is a seductive way to drive your business, but ignoring the opponent can hurt you as much as being obsessed by them.

If you know the enemy and know yourself, you need not fear the result of a hundred battles.  If you know yourself but not the enemy, for every victory gained you will also suffer a defeat.  If you know neither the enemy nor yourself, you will succumb in every battle.

Sun Tzu, The Art Of War

This is something I learnt from practicing martial arts. Before an important fight, you must watch videos of your opponent's previous battles (or previous fights in a competition). Watch how he moves, what he is good at, what he sucks at. It will help you develop effective ways to attack him and defend yourself. Strategy is not the only important thing in a battle; attitude can make all the difference. If you have a very determined opponent in front of you, this will affect your morale during the battle, and therefore your performance. Invest in strategy by analyzing the market, including competitors, and in self-confidence. The former will help you differentiate in the market. The latter will surely contribute to making your competitors fear you, leading them to one or both of the unwanted behaviors described in this post, and making you the leader. Maybe.

Controlled input: the missing piece of time management

In this post, I'll talk about a problem that affected me badly and that I see in too many fellow entrepreneurs and developers.

Twelve years ago, I thought that increasing my productivity would solve my problems. It did the exact opposite. My problems did not disappear; they grew as I became more productive. Until I learnt that I was missing an obvious piece of proper time management: commitment, or what I prefer to call controlled input. For those who don't see the obvious coming (like me a few years ago), this is for you.

In 2000, I started freelancing, hit by the entrepreneurial fever. Very quickly I became overwhelmed by work and projects. Sometimes I had to stop completely and think: "What are you doing?" I was doing three things at the same time, in addition to reacting to every external disturbance such as phone calls. That's when I decided to invest in something I had not been taught at school or by my parents: organizing myself. At the time, delegating was out of my reach.

I purchased top-rated books on the subject and went to training courses. I started to learn and to put everything into practice. Productivity increased dramatically; I became an unstoppable working machine. In less than two years, I was able to create five companies (with the satisfaction that all still exist today), in addition to freelancing and working on my numerous side projects. This was made possible by the increased productivity and by the fact that almost 95% of my conscious time was spent working. I started to earn a lot of money, more than I could handle. But all of this had a price: I became like a zombie and eventually I burnt out.

I had missed something very important that I had not yet learnt to manage: my commitments. I was tempted to say yes to everyone and, more importantly, to myself. As an example, any new idea I had would be turned into a new company, immediately. I finally learnt how to solve that problem the hard way.

When I talk about it to friends, employees or students, I use the illustration of the tap and the funnel. The tasks flow in from the tap (your input), while the bottleneck of the funnel is you (your maximum output, your productivity). What's in the funnel is your commitment.

Funnel

Below is an illustration of three possible scenarios.

  • Overwhelmed: you have too much work and you can't face it. Being overwhelmed affects your productivity negatively because of stress and other practical factors, e.g. having to multitask. Not to mention the waste of unfinished (or low-quality) tasks.
  • Increased Capacity: you decide to learn GTD to increase your productivity. It works: you have a larger bottleneck, but you are still overwhelmed. You do more in the same time, and by your new behavior you teach others (and yourself) that you can do even more. Instead of solving your problem, this actually worsens it.
  • Controlled Input: you control both external solicitations and personal commitments. Input is controlled and matches your capacity; everything is under control. This is a part of self-awareness.
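As a toy sketch (with made-up numbers, not from the original illustration), the three scenarios above can be simulated as a simple backlog model: the backlog stabilizes only when the input rate does not exceed your capacity, no matter how much you enlarge the bottleneck.

```python
def backlog_after(weeks, tasks_in_per_week, capacity_per_week):
    """Toy funnel model: unfinished tasks pile up whenever input exceeds output."""
    backlog = 0
    for _ in range(weeks):
        # New tasks flow in from the tap; you complete at most your capacity.
        backlog = max(0, backlog + tasks_in_per_week - capacity_per_week)
    return backlog

print(backlog_after(10, tasks_in_per_week=12, capacity_per_week=8))   # Overwhelmed: 40
print(backlog_after(10, tasks_in_per_week=12, capacity_per_week=10))  # Increased Capacity: still 20
print(backlog_after(10, tasks_in_per_week=9, capacity_per_week=10))   # Controlled Input: 0
```

Raising capacity from 8 to 10 only slows the pile-up; controlling the input is what actually empties the funnel.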

Funnel Details

Properly or improperly managing your commitments has many other effects, for example on trust. The more commitments you fail to meet, the more you teach others (by conditioning) that you are not reliable. They will progressively lose trust, and everything you say will be seen as something said by the unreliable guy. It works both ways: if you succeed in meeting almost every commitment you make, you will teach others that you are very reliable. You will build trust and increase your circle of influence. This includes trust in yourself.

Here are a few ideas on how to manage input:

  • Deadlines set by others: in the developer's world, we often face situations in which other people set deadlines for us. When I face such a situation, I re-estimate the task myself and compare it with my current commitments. If there is a discrepancy, I discuss it with the person who set the deadline. In short, I have learnt to say no, but with a proper argument: saying no without any explanation is not only rude, but unprofessional.
  • External disturbances: I'm always amazed when I see someone look at his ringing phone and say: "oh no, not him, he disturbs me all the time". Why not simply ignore the call? You are NOT committed to answering the phone; you can call back at a better time. This applies to everything, including emails: they can wait another three hours for an answer, right? Besides, these interruptions are real productivity killers (Nass, Ophir, Wagner 2009).
  • Ongoing projects: limit your ongoing projects. Don't involve yourself in two big projects at the same time; I limit myself to one large project and one or two much smaller ones. To make this possible, I put every idea or thing I would like to do on a list. I update the list often with new items, but nothing comes off it until I have the free room (time) for it.

Increasing your productivity is very easy: the techniques work and are easy to learn. The hard part is learning to say no. To others, but also to yourself. If you are like me, it will take some time to be completely cured of this bad habit, but being aware of it is certainly one big step. Be productive, control your input, be happy (Oswald, Proto, Sgroi 2009).


The actor–observer asymmetry

There is a difference of judgment in any activity depending on whether we are an actor or an observer (Malle, Knobe, Nelson 2007). I realized this when I was practicing martial arts, long before my studies in psychology. The audience, usually parents, downplayed the difficulty of the discipline and very often judged the activity incorrectly. One of the most common remarks was: "but it is only dancing."

This comment annoyed many of the new practitioners, but not the more experienced ones, who knew it was a gross misjudgement. You could see the difference when these same people decided to try the discipline once. As you can imagine, most did not go beyond the first attempt, as it was physically demanding and much more difficult than they had imagined. Their judgement of the observed activity radically changed once they became actors.

In our everyday life, we alternate between the positions of observer and actor. In both situations we make judgments, and many of the decisions we take are based on them. As observers, we can misjudge the behavior of others; as actors, we take the judgment of observers into account. It is important to recognize these situations and act accordingly.

Let's take a concrete example: starting a software business. This is a subject dear to my heart, as I far too often watch the growing disillusionment of entrepreneurs as they take in the reality of things. As the disillusionment grows, only the predisposed (very rare) or those working for pleasure keep going.

The reality is that entrepreneurs work very hard and take on tasks they would probably never have agreed to do as employees. You must also be aware that very few projects succeed at the first attempt; most people stop after the first failure.

When we see talented entrepreneurs in the newspapers, we think it's easy: as simple as registering a domain name, writing ten lines of code and then selling your company for millions. The reality is that most successful people have worked hard, often in physical and psychological pain, for many years, and have faced all kinds of problems. They made the difference by persevering. All my successes have been preceded by hard work and pain.

It is difficult to realize this when you have not been there yourself. But how do you know before you make a judgement or take an important decision?

The first step is certainly to become aware of any bias and take it into account. It is very difficult as these behaviors are unconscious and judgment is rooted in how we operate.

When you realize that you are in an observer's position and about to judge, and eventually to take a decision, you should not be satisfied with the information readily available. As you may have noticed, observing is insufficient to form an opinion, even when you are aware of the bias. Short of testing it yourself (you can't start a business as a test the way you can attend a single martial arts class), the only solution is to ask questions of the actors: those who have experience in the field or are currently in the situation. Do not question just one of them; the more people you question, the more relevant your information.

The only concern with this approach is that it can block you from moving forward. Indeed, if you ask too many questions you can start to stagnate, and everyone knows that one of the primary qualities of an entrepreneur is the ability to move forward quickly. Many entrepreneurs are also characterized by a certain impulsive trait, which will be discussed in a future article.

Awareness of the actor / observer asymmetry is directly related to critical thinking: identifying biases, separating fact from opinion and analyzing data. Awareness of our mental functioning is, again, the key.

Critical thinking for developers

You gain knowledge from information coming from many different sources including books, articles, blogs, conferences and all the discussions you have with other professionals. Being able to interpret, evaluate, assimilate, synthesize and apply the data you collect is called critical thinking and is an essential skill for anyone (including the developer).

“A persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports it and the further conclusions to which it tends” Edward M. Glaser

The word critical derives from the Greek word kritikos, which means discerning judgment. The roots of critical thinking come from analytic philosophy (the Greek Socratic tradition) and pragmatist constructivism (Buddhist teachings).

In this article, I'll try to isolate the three most common steps in practicing critical thinking, an approach similar to scientific skepticism.

1. Identify potential cognitive bias

A cognitive bias, such as the confirmation bias, is a pattern of deviation in judgment that occurs in particular situations. Everybody is affected to some degree. The more you know about these biases, the less likely they are to affect your judgment negatively. Here are some well-known examples:

  • Confirmation bias: the tendency to search for or interpret information in a way that confirms one's preconceptions.
  • Mental filter: the tendency to focus exclusively on certain negative (or positive) details of an experience, for example noticing only a tiny imperfection in an otherwise useful piece of clothing.
  • Gambler's fallacy: the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. This results from an erroneous understanding of the law of large numbers. For example: "I've flipped heads with this coin five times in a row, so the chance of tails on the sixth flip is much greater than heads."
  • Overgeneralization: extrapolating limited experiences and evidence into broad generalizations.

Sometimes journalists, politicians and even experts fall prey to overgeneralization and write things like: "The scientists confirmed global warming". Try replacing words like "the scientists" or "the experts" with "some scientists" and "some experts", which usually better reflects reality; it gives the text a very different meaning. Be aware that identifying someone else's bias is easier than identifying your own, and don't forget that some people use these tricks to consciously manipulate opinion.
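Of the biases listed above, the gambler's fallacy is the easiest to check empirically. Here is a quick simulation sketch (arbitrary parameters, a fair simulated coin): even immediately after five consecutive heads, the estimated probability of heads on the next flip stays close to one half.

```python
import random

def prob_heads_after_streak(n_flips=200_000, streak=5, seed=1):
    """Estimate P(heads | the previous `streak` flips were all heads) for a fair coin."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    # Collect the outcome that follows every run of `streak` consecutive heads.
    after_streak = [flips[i] for i in range(streak, n_flips)
                    if all(flips[i - streak:i])]
    return sum(after_streak) / len(after_streak)

print(prob_heads_after_streak())  # close to 0.5, not "much greater than heads"
```

The coin has no memory: past flips change nothing about the next one, which is exactly what the fallacy denies.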

2. Separate facts from opinions

Anyone can post anything online, and this is a great opening for narcissistic leaders and other fake experts with extroverted personalities. The internet is full of information from such sources, and much of it is based on opinion rather than fact. A critical thinker is able to separate the two.

Prefer references to recent scientific studies; serious papers cite multiple sources. But as the next point shows, citing references is no guarantee that the information and its interpretation are correct.

Always verify the credentials of the experts. Has the business expert created only one or two businesses, or several, including some that ran into difficulties? Is it easy to find information about them, or does data about their past seem hidden or hard to reach?

3. Analyze the data

To be reliable, a source must be based on empirical data produced by observation or experiment, and the theory built on the experiment must be refutable.

“A theory which is not refutable by any conceivable event is non-scientific”. Karl Popper

In the human sciences, an experiment that aims to establish a theory or methodology should be reproducible in at least 95% of attempts. A good example of a non-scientific theory is Freud's Oedipus complex in psychology: there is no way to refute it, because Freud states that if the behavior doesn't appear, it is because it is repressed. There is no way to validate or invalidate the theory. Even if there is a possibility that it is true, there is no way to verify it, so it should be treated with care.

Here are the research methods commonly used in human sciences:

  • Observation: usually the first step of research, attempting to identify the potential causes of a behavior.
  • Surveys & tests: since we can't observe thoughts, we can ask people to describe them. The problem with surveys is that you can't be sure the answers are accurate: social desirability bias, demand characteristics and memory errors are some of the problems you will encounter, in addition to sampling bias. And when you interpret the results with a correlational approach, it is impossible to prove that changes in variable A cause changes in variable B. At best, this method can describe or predict a behavior (what), but not explain it (why).
  • Case study: the most popular research method in software, as it is easy to do: observe a few people and try to determine a pattern. Just as with surveys, you can't really prove a causal effect, but like observation it is a good first step toward the experimental approach.
  • Experimental: the experimental approach is the only type of methodology that, if well conducted, can support causal statements. Such experiments are very difficult to carry out, especially in the field of software development: a well-conducted experiment typically requires a testable hypothesis, a control group and random assignment of participants.

Have you ever read a book in which a successful entrepreneur or software developer converted his own, unique experience into a methodology? Critical thinking would force you to evaluate the methodology by asking how many successes emerged from the millions of readers, and how many of those would have been successes anyway, without applying the methodology. As a critical reader, you will be able to take what is useful from the book and leave the rest for what it is: a case study at best, an observation in most cases. This applies, of course, to any source of information.

But even the results of well-conducted studies can be misinterpreted, consciously or not, by the person citing them. A great example of the conscious kind is people caught lying with statistics. Politicians opposed to the decriminalization of marijuana claimed that studies showed that 87% of heroin addicts started by using cannabis; cannabis, therefore, would inevitably lead to hard drugs. What they forgot to mention are the millions of people who smoke cannabis and never touch heroin. The information is true, but manipulated. In fact, we could present a study demonstrating that 100% of heroin addicts have drunk coca-cola: should we prohibit coca-cola?
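The manipulation here is a base-rate omission, easy to make concrete with hypothetical, illustrative numbers (none of these figures are real epidemiology): the quoted 87% answers a different question than the one the "gateway" argument needs.

```python
# Hypothetical, illustrative figures only.
heroin_addicts = 100_000
cannabis_users = 20_000_000
addicts_who_used_cannabis = int(0.87 * heroin_addicts)  # the quoted "87%"

# The statistic that gets quoted: P(used cannabis | heroin addict)
p_cannabis_given_heroin = addicts_who_used_cannabis / heroin_addicts

# The statistic the "gateway" claim actually needs: P(heroin addict | used cannabis)
p_heroin_given_cannabis = addicts_who_used_cannabis / cannabis_users

print(p_cannabis_given_heroin)  # 0.87
print(p_heroin_given_cannabis)  # 0.00435, i.e. less than half a percent
```

Both numbers are computed from the same data; quoting only the first conditional probability while hiding the second is precisely how the statistic misleads.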

The 3 steps

Developing a critical mind is not easy, and we must be prepared to accept that a certain number of our current beliefs are wrong. To summarize, here are the three steps to follow to avoid being intoxicated by the information you gather:

  1. Identify potential cognitive biases.
  2. Separate facts from opinions.
  3. Analyze the data.

To learn more

How much of your current knowledge and beliefs are opinions rather than facts?

Check the cognitive bias list on Wikipedia to learn more about them. You can read more about the different methods summarized above on this page.


8 reasons why you shouldn’t rely on source lines of code as a software metric

Estimating the value of software production from the number of lines of code (LOC, KLOC or SLOC) is as popular as it is controversial. The main criticism is that too many factors influence the final measured value. Robert E. Park (1992, page 140), software metrics specialist and staunch defender of the method, responded to critics with the following:

“When we hear criticism of SLOC as a software measure, we are reminded of a perhaps apocryphal story about a ditch digger who, when asked one day how he was doing, replied, “Dug seventeen feet of ditch today.” He didn’t bother to say how wide or how deep, how rocky or impenetrable the soil, how obstructed it was with roots, or even how limited he was by the tools he was using. Yet his answer conveyed information to his questioner. It conveyed even more information, we suspect, to his boss, for it gave him a firm measure to use as a basis for estimating time and cost and time to completion.”

Originally, this technique could probably be used under the conditions Park mentions. Later models such as COCOMO (Boehm 1981) also allowed developers to take into account a number of parameters whose variability was probably reasonable at the time. But since then, the number of factors affecting the number of lines of code has grown so much that it is very unwise to take this measure seriously, whether for evaluating the software itself or the productivity of the team that built it. I will try to illustrate the problem with eight arguments.

1. Different languages and different frameworks

Today, hundreds of different languages exist (Wikipedia 2013), and for each of them there are several frameworks. For the same functionality, the number of lines of code produced can vary enormously depending on the technology chosen. In addition, modern architectures combine different technologies, which further complicates the calculation. Correction factors exist, but they hardly seem defensible given the wide variety of applications being developed today.

2. Experience and competence of the developers

We must also take into account the experience of the developers involved, as it affects the calculation in many ways. A very competent developer often writes fewer lines of code than less experienced colleagues, because they use design methods created for the sole purpose of reducing the number of lines while increasing readability and maintainability. They are also more familiar with the functionality offered by their tools (the technology stack); through ignorance of it, many programmers rewrite existing code, greatly inflating the line count. In this regard, many experts in the area do not hesitate to speak of "lines of code spent" as opposed to "lines of code produced" (Dijkstra 1983).

3. The practice of refactoring

The fact that the same piece of code changes over time through refactoring (reworking the code) can also skew the results. This practice restructures source code without adding functionality (Wikipedia 2013), and it is becoming more common because it increases code quality and reduces technical debt. It can produce unexpected situations: if many developers refactor while the lines of code are being measured, the result could give the appearance of reduced output (fewer lines of code than in the previous measurement), when in fact the opposite is occurring.
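As a hypothetical illustration (the function and names are invented for this sketch), here is a refactoring that leaves behavior unchanged while shrinking the function: an SLOC-based measurement would record it as negative productivity.

```python
# Before: an imperative version, six lines long.
def total_even_before(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

# After refactoring: identical behavior, two lines.
def total_even_after(numbers):
    return sum(n for n in numbers if n % 2 == 0)

# Same inputs, same outputs -- only the line count changed.
assert total_even_before([1, 2, 3, 4]) == total_even_after([1, 2, 3, 4]) == 6
```

By the metric, the codebase just "lost" four lines, even though readability and maintainability improved.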

4. The practice of reuse and / or code generation

Reusing existing code is very common and strongly encouraged by DRY (Don't Repeat Yourself). Many parts of the code can be retrieved from a previous project or copied from an open-source project, a library or a blog post. In addition, modern development tools can generate code automatically for developers working with various high-level design tools.

5.  Tasks outside development

Software development is not limited to writing code at a keyboard. Many other tasks are needed to produce quality code, and high variability can emerge here depending on the methods used, the composition of the team or the documentation requirements.

6. The reliability of the measurement tool

A wide variety of measurement tools are available on the market. Given the lack of consensus on how to count the lines of code in a source file, the outcome may differ materially depending on the tool used. In addition, technical problems can arise when identifying what should actually be counted: for example, some tools have difficulty differentiating comments from instructions when the two are mixed (Danial 2013). The efficiency and quality of these source-line counters also vary widely.
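A minimal sketch (hypothetical counting rules, not modeled on any specific tool) shows why counts diverge: the same four-line file gives three different totals depending on whether blank lines and comment-only lines are excluded, before even tackling the mixed comment/instruction case.

```python
SOURCE = """\
# compute the answer
x = 1

y = x + 1  # inline comment: count this line or not?
"""

lines = SOURCE.splitlines()
physical = len(lines)                           # every physical line: 4
non_blank = sum(1 for l in lines if l.strip())  # blank lines excluded: 3
logical = sum(1 for l in lines                  # blank and comment-only lines excluded: 2
              if l.strip() and not l.strip().startswith("#"))

print(physical, non_blank, logical)  # 4 3 2
```

Two tools applying different rules to the same file can thus legitimately report figures that differ by a factor of two.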

7. The (potential) manipulations

When a measure may have an impact on one or more people, we need to consider the possibility that some of them will try to manipulate it to their advantage. If a developer's productivity is measured by the number of lines of code (or functions) produced, they can very easily pad the source code to inflate the results. This problem is well known in companies that use KPIs to assess their employees, and one can just as easily imagine a whole company maximising the numbers when it knows it will be evaluated on this metric.

8. Time

Almost all the above elements are time sensitive. For example, a developer's competence changes with practice (this includes the famous learning curve), and the features of languages and frameworks keep evolving to increase developer productivity. The longer a project lasts, the more sensitive the measurement is to this bias.

Conclusion

In conclusion, estimating the production effort or value of a program using this software metric is very risky, yet the technique remains widely used. Some estimation experts, such as Steve McConnell (2006), are well aware of the method's weaknesses but still use it in the absence of anything better. Other methods based on "function points" (business functionality) have attempted to resolve some of the issues addressed above, but their values remain highly correlated with the number of lines of code (Albrecht 1983). For me, the information obtained from these metrics, and anything based on them, should never be considered reliable and should be used with great caution in your decision-making process.

Note: some of the information in this text comes from research I carried out for LIEU (Liaison Entreprises-Universités), a network of valorisation units of the universities and colleges of the Wallonia-Brussels Federation.

References

Albrecht, A. (1983). Software Function, Source Lines of Code, and Development Effort Estimation. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1703110&searchWithin%3Dp_Authors%3A.QT.Albrecht%2C+%2FA%2F.J..QT.%26searchWithin%3Dp_Author_Ids%3A37850740200

Boehm, B. W. (1981). Software Engineering Economics.  Englewood Cliffs, NJ. http://userfs.cec.wustl.edu/~cse528/Boehm-SE-Economics.pdf

Danial, A. (2013). CLOC Limitations. Retrieved August 2, 2013, from http://cloc.sourceforge.net/#Limitations

Dijkstra, E. W. (1983). The fruit of misunderstanding. Retrieved August 2, 2013, from http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD854.html

List of programming languages. (2013, July 30). In Wikipedia, The Free Encyclopedia. Retrieved 12:48, August 2, 2013, from http://en.wikipedia.org/w/index.php?title=List_of_programming_languages&oldid=566431816

McConnell, S. (2006). Software Estimation: Demystifying the Black Art. Microsoft Press. http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351/

Park, R. E. (1992). Software Size Measurement: A Framework for Counting Source Statements. http://www.sei.cmu.edu/reports/92tr020.pdf

Réusinage de code. (2013, July 5). In Wikipédia, l'encyclopédie libre. Retrieved 12:04, August 2, 2013, from http://fr.wikipedia.org/w/index.php?title=R%C3%A9usinage_de_code&oldid=94719037