Science and Critical Thinking in the Information Age

It is evident today that the sheer vastness of information available about practically any topic is barely comprehensible. This is due to an exponential increase both in the quantity of material produced and in its availability to ordinary people anywhere in the world. Most of the credit of course goes to the development and spread of the Internet, but also to its downstream effects. The availability of a nearly costless distribution platform has incentivised a growing industry of content creators (think of bloggers, Wikipedia contributors, et al.) who would otherwise have been dissuaded by the gated community of the legacy publishing industry.

The result of this rapid, unorganised growth is a body of information that is largely unfiltered, uncategorised, and unconnected. Search engines solve one crisis by acting as a universal index (and in certain cases, a glossary), but they are only effective for locating information when you know exactly what you are looking for; they cannot constructively assist when you need an idea, information to form your own opinion, or an overview of the landscape.

Let's imagine the experience of walking through a dense rainforest. If you were looking for a particular flower, an army of manual labourers (search engines) could undoubtedly scour the land and find it faster than you could alone. But if you wanted to ascertain whether the breeding of an insect was affecting the spread of a plant species, that same group of inert labourers, who can only follow simple instructions, would not be able to assist you with that kind of task. The most they could do is quickly bring you certain types of plants of your choosing and leave you with an abundance of information from which to make your decision.
This is what search engines do in a simplified reality, even setting aside things like results being biased by the buying power of the hosts (akin to the labourers bringing you large numbers of false flowers because a biased party wanted to influence your decision in a particular direction).

The critical skill necessary now is not the ability to gather or retain information, but the ability to parse through large quantities of it (of somewhat questionable quality) and draw useful conclusions. This is what is commonly called, and what I will continue to refer to as, 'critical thinking'.

If we look at an education as the process of equipping ourselves with the toolkits with which we generate value in the world, for which we receive compensation, then the efficacy of parts of the current system of education is in serious doubt. By equipping students with a skillset (data memorisation and regurgitation) that has almost no value anymore, it cripples the student's ability to be valuable and leaves them high and dry when it comes to preparation for an actual career.

The skill of critical thinking is not only relevant in the workplace, but increasingly critical in the general populace with the spread of fake news, blatant exaggeration, and misinformation from what used to be sacred sources like the mainstream news media and elected officials. The ability to be constructively skeptical grants a degree of immunity from being hoodwinked by misleading claims or statistics. It lets us at least ask the question "why is/isn't this argument credible?" and remember to apply it not only to the big questions but also to the little ones we come across every day, be it an unbelievable advertisement in the newspaper or a claim about a shocking new medication forwarded to you by a friend.
Having the skill of critical thinking allows one to perform these cursory analyses routinely and almost subconsciously, which is necessary for their effective deployment. The beauty of the technique is that it doesn't have to yield all the answers to be useful; it can simply act as a sixth sense humming away in the background. And in a world where most people simply believe what they see, having it is a veritable superpower.

 - "For in the country of the blind, the one eyed man is king."

In theory, the process of science is really one of skeptical thought and subsequent experimental verification, applied to claims about nature. And the emphasis is on the process: not the body of knowledge, but the process. The first part of that process is exactly this kind of critical thought, asking exactly these kinds of basic questions.

Using Money Well - Removing a Negative

The conundrum that strikes every deliberate thinker who starts to earn enough to have disposable income is how to spend it in a way that positively enriches their life. But care must also be taken to avoid the consumerist mind trap of believing that your life can be made better simply by adding things (that you can buy).

Everyone has done this at least once with Monopoly money

Instead of nebulous advice about the relation between money and happiness, I came across a useful corollary to the overused adage, in a Tim Ferriss podcast with Mr. Money Mustache (http://tim.blog/2017/02/13/mr-money-mustache/).

The thought-provoking line was: "Instead of using money to add more positives to your life, use it to first remove the negatives."
This simple shift of the goalposts (or rather, a simple repainting of them) brought about a flood of useful ideas about how to spend the money, and I invite you to try the exercise right now as well.

Given it some thought? There are some useful examples in the podcast as well, but some of the easy ones I came up with were:

  • Improve commute experience when necessary
  • Spend on tools that improve work efficiency because they pay back in the long run.  
  • Consider hiring help for business admin work that you detest.  

I've found a lot of these follow the theme of exchanging money for time, because in almost anybody's calculus, time is a far more valuable resource.

Especially for people who have just started earning, parting with that money for non-essential things might seem treacherous, but a lesson I quickly learned was that the best strategies for making money grow were to let it flow into channels that allowed it to multiply. Simply storing it doesn't have much more than an additive effect, but loss aversion is a hard demon to fight.
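
To make "additive versus multiplicative" concrete, here is a back-of-the-envelope sketch in Python. The numbers are purely hypothetical assumptions for illustration (a fixed 7% annual return, which no real market guarantees, and a constant yearly contribution), not advice:

    # Hypothetical: a fixed 7% annual return and a constant yearly deposit.
    def stored_vs_invested(yearly: float, rate: float, years: int):
        stored = 0.0    # idle cash grows only by what you add (additive)
        invested = 0.0  # invested cash earns a return on itself (multiplicative)
        for _ in range(years):
            stored += yearly
            invested = invested * (1 + rate) + yearly
        return stored, invested

    stored, invested = stored_vs_invested(yearly=100_000, rate=0.07, years=30)
    print(f"Stored:   {stored:,.0f}")    # 3,000,000  -- purely additive
    print(f"Invested: {invested:,.0f}")  # ~9,446,000 -- roughly triple

Same deposits over three decades, and the multiplicative channel ends up roughly three times larger; that gap is what loss aversion quietly costs.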

When you hear the phrase "Never gonna give you up", it's usually not a good sign.

Investing it (especially in yourself early on) seems to reap much greater returns in the long term, whether that's through education (formal or informal), recreation, or simply improving the quality of your life.

Is curiosity a luxury?

There are few traits more common among good scientists than curiosity. Almost by definition, people working in the field of finding out new things must have the desire to seek them out. And yet, everyone treats it like a congenital gift: you are either a curious person, or you are not. I've rarely heard anyone say they would like to become more (or less) curious.

Even still, I googled the question "How do I become more curious?" and all the answers could be summed up as:

  • Ask questions
  • Ask more questions
  • Keep asking questions
  • Be humble and persevere 
  • Ask a few more questions.

And indeed, the act of questioning is central to curiosity. Humility is also important, because one does not ask questions if one thinks one knows all the answers. But today I want to probe the circumstances that lead people to be curious. An easier group to investigate would be children up to school age, because they all go through basically the same routine till age 15. We know that young children are naturally curious: their every behaviour is exploratory, their every action an implicit experiment; the little tykes are learning machines. But as those same individuals leave their secondary schooling, you start to see that natural instinct wear away in a large percentage of them. In the quest to grow the working scientific community and improve science literacy in the general population, this is a systemic catastrophe. How does our education system dull the very blade it seeks to create, and how do some sharp ones still emerge?

There are a few more qualifying factors that might help us tease open this mystery. But first, a little background. 
One of the ways I channel my drive for science communication is by being part of the outreach team at the Tata Institute of Fundamental Research (TIFR), an advanced research facility in my city. A few times a year, they invite the public into their campus and labs to showcase the kind of work that researchers do, and how and why they do it. A large portion of the audience is usually school- and college-age students, and I frequently assist in escorting the boisterous groups through their visit. Over the years, I have had the opportunity to observe a not insignificant sample of kids in this environment, and some of the observations I have made have prompted this line of questioning.
There have always been distinguishable groups: some who did have this elusive trait of curiosity and seemed to probe their surroundings with the prongs of their questions, and others who, for whatever reason, didn't seem to exhibit the trait. (Notice that I'm consciously not saying they were not curious; they may be in a different environment, just not this one.) So what causes this preferential exhibition? Let's take a few potshot guesses, try to dismantle them (like any good scientist would), and see what we can scrape together from the remains.

Let's get the easy ones out of the way first.
Maybe they did have questions, but lacked either the communication skills or the confidence to voice them in the situation. This is not an unreasonable thought, because I know from first-hand experience that TIFR can be an intellectually intimidating place to be, surrounded by people who objectively know more than you. But categorically, if you never express your curiosity and attempt to find answers to your questions, is there any utility to the curiosity in the first place?

Another hurdle may have been prior education. If you don't know how to box and you step into the ring to spar with a competitor, you're going to have a bad time. Considering the sorry state of education in a lot of schools here in India, it's not a stretch of the imagination to assume that they were not well informed about the basic fundamentals, or more critically, never told that they could know, or ask for, more than what was in their book or exam. A significant number of children seemed to come from schools that institutionalised this idea within them, and couldn't seem to fathom that they should be questioning the person speaking.
Here is where some interesting separation started to occur along other well-defined boundaries. It was statistically apparent that the groups of children that came from "better" (read: from a higher socio-economic stratum) schools seemed to be more open to the idea of questioning what they saw and heard. (This was by no means a universal phenomenon, and we will discuss the [large] exceptions later.) This might sound obvious since I used the word 'better', but let's unpack it.
These schools were not government-aided public schools, but newer, privately run institutions. They followed different syllabi, probably attracted better teachers, and as a result created a different learning experience. But before we give all the credit to these uber-schools, there is an important dependency to note. The children with access to these schools came from a completely different socio-economic class than those who attended the public schools. To evaluate the causal factors in the development of curiosity, we must try to isolate the effect of the schooling from that of the socio-economic background (because one can be much more easily replicated than the other).

It seems valuable to try to unpack what characteristics of the better socio-economic upbringing allow this trait to flourish, because maybe they can be replicated for people outside this group as well.
One would seem to be the likelihood of the parents being well educated. The existence of a couple of experienced mentors to guide one through the process would intuitively improve the learning experience, by providing support and direction.
Children who are first-generation learners in their family are less likely to receive this support. But if we know what is lacking, we can try to provide it. Intelligent mentoring could fill the gap and give the children a role model to aspire to outside of their teachers.
People from the former group also categorically have access to more resources, allowing them to tap better sources of information and direction (like continuous access to the internet, libraries and subject experts). This can be replicated in part for the latter group, but as some of the exceptions will show, it may not be all that necessary.
Another factor could very well be that the safety net provided by a well-off family supplies the security required to pursue less secure ventures for their future careers, instead of having to aim for a less ambitious but more stable job that guarantees standard employment and income. The pressure to get a job sooner rather than later would also rule out exploratory higher education.
From a few more empirical observations, it has come to my notice that the school also has a large role to play. By acting as magnets for both brighter students and better teachers, these schools are able to maintain a far higher-than-average standard of education, and because they are less pressured by standard competitive exams, they can focus on the true learning process instead of rote memorisation.

But now, let's visit the exceptions, and see which traits are necessary, but not sufficient. 
It is another frequent observation that some students who come from such schools are not remotely interested in the learning process at all, possibly because the safety net provided to them shelters them from the need to perform or work hard. For these people, all the resources and guidance available to them are practically wasted. So clearly resource availability alone is not enough; the internal motivation to learn must also exist.
Is the desire and drive to learn both necessary and sufficient? Not entirely, but maybe. As we enter a more information-dense age where access to knowledge is more egalitarian, people with fewer resources can access learning tools previously reserved for the ultra-elite (an excellent example being the free availability of lectures from world-class institutions on every subject, sharing the knowledge of the world's best teachers with the world).

So then the question becomes, where does this internal motivation come from, and how can it be created and nourished?
This is obviously the question that all of education is trying to answer, but I'll take a stab at it from my own view and experience.
A number of people in my life have asserted that I am an internally driven person in a few areas (like science and parkour). Yet I don't usually think of an abstract force inside me that inextinguishably drives me to do more of these things. I only hazily remember how each developed from being something I found cool, to something I got better at over time and through work, till I reached a point where it no longer needed any external support and sustained itself. It is almost as if motivation were a fire that needs to be lit by a spark, nurtured in its early stages, and provided with adequate kindling and protection from stray breezes (because that's when it's most likely to be extinguished). Once the fire is well and truly going, it is mostly self-sustaining. It still needs more fuel every now and then, but by then it's large and warm enough to sustain itself and to light a few other fires as well.

So if you feel that fire within you about something, light a few more.

[candle gif]

Death and the Internet

Wandering along an empty main road, I walked along the divider. No cars, no noise, cool breeze drifting along. The sun hung like an oil painting, almost lazily. The atmosphere was that of a set without any actors. A lonely man sat restlessly against a tree. The wind rustled a few leaves into his lap. He was content. 

Standing in the middle of a street, even with no cars in sight, is a slightly uncomfortable endeavour. The years of conditioning that this is not a place to pause leave you with a slight twitch in the stomach, nothing painful, but just enough to remind you that something is not right. But like all urges, it eventually subsides. 

A lone cyclist is visible in the distance, glancing at his surroundings. Two people with nowhere to go experience a strange attraction. We chatted for a while, about the obvious lack of happening, the joy of solitary exploration, and the verisimilitude of it all. Parting ways with a glancing smile, we continued on our journey to nowhere.

It was the day after someone died.  

Not just any someone, a particular somebody with a long beard and orange clothes. A lot of other people with orange clothes were not happy with his deadness, and so they started shouting at things. Nothing specific, just a general shouting. 
This disturbed everybody so much that they refused to leave their homes; even those who didn't really care about the affairs of men in orange clothes.

What was puzzling about the entire situation was how violently the people in orange clothes reacted to deadness. There was nothing violent about the death of the one with the beard, or anything controversial at all. But even this completely natural occurrence created a frenzy, and the effects were palpable.

Which really raises the question:
Have humans really learned to deal with death?

Of all the aspects of the human condition, death is one of the few that are undoubtedly universal.
Of the roughly 100 billion humans estimated to have ever existed, only about 7 billion are alive today. All the humans of the past have inevitably undergone the process of deadening, by one means or another. There are few things of which we as a collective species are more sure.
And yet, we don't really act like it.

Very little of our behaviour takes this fact into account. We joyously consume unhealthy food, deliberately inhale poisonous smoke, and lead a sedentary existence mindlessly consuming whatever the establishment throws our way. Most know that this is detrimental, and most could even be prompted to agree. Most would also say that they don't wish to die. They would probably even do other things to try to extend their living time, and still not make the connection.

Even though it is abundantly clear that we don't factor our eventual death into everyday decisions, it is not as transparent why.
Beings wired this way may have some sort of evolutionary advantage, one that leads to greater survival, but it's not too much of a stretch of the imagination to think that this wouldn't be a pleasurable existence. (Or would it?)

But that is mostly idle speculation.
What is observable and testable, and therefore more interesting, is how death is thought about, right now. 
For those not living under a cultural rock, our society has dramatically changed the way in which it connects with the wider world. The advent of social media has, in less than a full generation's time, altered the dominant medium of communication between people, and in doing so, necessitated the formation of new norms for certain behaviours.
A lot of this was visible in the early days of the internet, when bullying and abuse were far more rampant. I think this was mainly because the usual norms for the treatment of other people that we are conditioned with while growing up didn't effectively transpose into the new medium, where the participants were far more physically detached. The inhuman treatment was prompted by the subtle notion that we weren't really interacting with humans.
This has, of course, changed and fortunately improved over time (barring a few cesspools like YouTube comments and certain forum threads), possibly due to the technology becoming more accepted in common life. As we grew up with the technology, it integrated with our physical lives more coherently than ever before. All aspects of our physical lives effectively made the transition to binary, at least in part.

But what about death?

This revolutionary change has not been in place long enough to witness the deaths of its adopters, but that point is not too far away. However, the few anomalous deaths from non-natural causes give us a glimpse of what it might be like.

One of the first companies to deal with this on a large scale was Facebook. Logically, it would have been among the first, since its product was the most personal of the group. Their current protocol allows a memorial page for deceased users, where at certain times friends can post comments, condolences, and other messages. xkcd also has an interesting What If article about when the number of dead people on Facebook will eclipse the number of the living.

But another, arguably more important question may be, what should happen to all their data? Not just their Facebook posts, but their email, their cloud storage, their webpages, or other miscellaneous social media accounts.
We have entered an era where we store a tremendous amount of information about and around ourselves, but what should happen to that information when we cease to biologically exist?

Should it just be stored for posterity, practically forever, until that becomes unfeasible or it is accidentally erased?
Should any data not accessed for, say, ten times a human lifespan be automatically deleted?
Will extension of the human lifespan rush into the world with a solution before this problem even arises?

Whatever happens, it's almost guaranteed to be interesting. And that is a world I'd like to live in.

Journalistic Integrity - Is this still a thing?

This line of thought was instigated by watching Kill the Messenger, the brief biopic of journalist Gary Webb, starring Hawkeye a.k.a. Jeremy Renner. It follows one man against the establishment, both the government and the mainstream news, as he tries to expose a government scandal involving the CIA and the cocaine epidemic. An emotionally twanging human story, but it got me thinking further about what it implied about journalism.

Now, the only thing you can walk out of a biopic and know for sure is that the actual series of events was anything but this.
But, keeping that in mind, it still calls into question the entire construct of journalism. Is it anything beyond the spread of truth and the informing of the populace? Is there anything that is too true to share?

I don't have the first shred of expertise to examine this from a general viewpoint, but I can look at it from the narrower sub-genre of science journalism.
What is its purpose? To inform the general public about what science is, the work that scientists do, and why it matters.

Is there, or will there ever be, anything in science that is too true to share? It is definitely not beyond the realm of possibility. I can already think of things that may be societally unacceptable but necessary to the advancement of humanity, like human genetic manipulation, human testing, and other politically controversial matters like embryonic stem cell therapy. Science, and maybe even the populace, may be better off having these things, but is there ever a case where an establishment is justified in its secrecy, in withholding information about their use from the public?

Well, when you put it that way... worth a ponder.