We mostly think of data as something we put into systems using electronic forms, with human operators pouring it in all around the world. There is also data about data, which is mostly what computers produce with their invisible algorithms, and then there is data generated by machines equipped with sensors that measure all sorts of parameters. Referred to as the Internet of Things, this network of devices collects data incessantly, streaming it into large databases. This data stream is about to explode, dwarfing the data collected by human operators.
Take healthcare, for example. Traditionally, data collection occurs at healthcare facilities. You go there, a friendly nurse attaches a device to you that measures your blood, your heart, whatever is needed to help doctors produce a diagnosis. Once you are done, the data stream stops. On exceptional occasions you are given a device to carry with you for data sampling. As sensors become cheaper and smaller, the healthcare industry is starting to blur the boundary between the inside and the outside of its facilities when it comes to data collection. Carrying monitoring devices will become normal, creating a huge amount of data.
Everything that touches our lives will be equipped with sensors: the house, the office, the cars, the roads, you name it. With IPv6, the number of connectable devices is practically unlimited: around 10^38 IP addresses will be available. Imagine what that world will look like.
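That 10^38 figure is easy to sanity-check yourself: an IPv6 address is 128 bits wide, so the address space holds 2^128 addresses. A quick back-of-envelope check in Python:

```python
# IPv6 addresses are 128 bits long, so the total address space is 2**128.
total_addresses = 2 ** 128

print(total_addresses)           # 340282366920938463463374607431768211456
print(f"{total_addresses:.1e}")  # about 3.4e+38
```

So "around 10^38" is right: a number with 39 digits, enough to give every grain of sand on Earth billions of addresses.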
WiTricity, a company that invented a system for charging batteries wirelessly, is considering the idea of powering cars through mini generators embedded in the road. This is an incredible idea. Cars flowing (driverlessly?) along the highway, absorbing power from the road as they roll smoothly to their destination, will be in constant dialogue with a huge network of small devices designed to identify them and measure their energy consumption, among other parameters. This is a data flood alright.
Who or what is going to handle all this data? Who or what is going to make sense of it all? Forget about privacy – that will go away anyway, there is no place to hide – but handling this data will be a huge challenge.
On one hand, we will use analytical tools to examine data and make decisions. This is a slow process that suits us humans, giving us time to figure things out. On the other hand, we need faster decision systems that respond to situations directly. Large financial institutions in the US are already going through a huge redesign of their organisations, replacing human traders with ultra-fast machines that execute optimised trades at lightning speed. The same will be adopted in healthcare, in transportation and in other areas.
A good question is: what system design principles do we need to adopt in a world of fast computers and an infinity of networked devices? Do we need to learn completely new skills that allow us to handle the increased cognitive load and to interact with computer systems in radically different ways – skills that are not taught in schools or elsewhere? At the moment, there is a growing disconnect between a schooling system obsessed with assessing numeracy and literacy skills and the transformed world in which we live. Perhaps technical system design needs to be merged into social system design, so that we don’t rely on highly skilled analysts and machines to make decisions, but integrate the computer network within higher-order social networks.
In a recent article published in The New York Times (“Sorry, Strivers: Talent Matters”), David Hambrick and Elizabeth Meinz discuss a research study directed by David Lubinski and Camilla Benbow of Vanderbilt University which demonstrates that people with a higher working memory capacity have a distinct competitive advantage in their careers. The correlation between a significantly successful career and working memory capacity is extremely high – too high not to be meaningful.
So, if you happen to be lucky enough to have been endowed by Mother Nature with a large working memory capacity, you will have it easy in life. You still need to work hard, but you are likely to be very successful rather than struggling. And what happens if you are not that lucky? You will have to work harder, conventional wisdom says.
The problem is that there are no accessible methods we can use to assess working memory. There are no benchmarks. You can find books, magazines and web sites with official assessment kits and quizzes to measure your IQ, but not the capacity of your working memory.
Let’s assume you know the capacity of your working memory. What could you do about it? I think you could do a few things to improve your odds of success, because this attribute is only one condition on your pathway to greater achievements, and its influence depends on other factors. Intelligence, good character, a capacity for sustained effort and motivation, to name a few other attributes, are all critical elements of success. Not all of them need to be strong; it depends on how good your strategy is at using them in a smart way. For instance, if your working memory capacity is lower, this affects your multitasking abilities and your capacity to handle big chunks of complex information. You could compensate for that by controlling your pace. At a slower pace, you could patiently use your strong cognitive skills to process the same complex information; it just takes a bit longer. If on top of this you use smart cognitive tools, then in the long term you could achieve the same performance levels.
If you perfect your mental frames and have a clear understanding of fundamental cognitive structures, you can accelerate your learning with practice. This was one of the most remarkable observations to come out of the Khan Academy’s online analytical tools: students who would normally fall behind in a standard classroom environment, if allowed to take their time to master the study units, eventually pick up the pace once they have consolidated their understanding of the fundamental cognitive structures covered in those units – regardless of how long that takes.
This is the big promise of education in the future: advances in understanding how these metacognitive skills (thinking strategies) can be used to adapt our approach to learning and problem solving to our personal profile, giving us a better chance of getting closer to what we are really capable of. Unfortunately, education in its current industrial format focuses on a simplistic manifestation of our abilities, limited to a few skills that rely heavily on working memory. This is most obvious in numerical computation and verbal abilities. The traditional literacy and numeracy subjects form the core of learning and teaching activities that target systematic knowledge acquisition without achieving mastery of thinking skills, and this is reflected in the structure of formal assessment programs. Education places little emphasis on thinking strategies, which would give young students a valuable toolset for life. If we teach students how to become better at organising their thinking, they will achieve more in their lives, and in the end they will be much happier people, because they have a better chance of personal fulfilment.
The relationship between instructional design and constructivist theorists is not one of the friendliest. Sparks fly when arguments break out. So, when I started reading Instructional-Design Theories and Models I expected to find a discourse totally immersed in traditional instruction-based teaching. But this wasn’t the case. Although the main philosophy of teacher-driven learning is still there, the tone is very conciliatory, recognising the need to adopt a learner-driven approach, which is at the heart of constructivist learning theory.
In Instructional-Design Theories and Models vol. 2, a few ID theories are downright constructivist. What a departure from the past. Reigeluth is quite blunt in his characterisation of education methods from the industrial era: “you couldn’t afford to – and didn’t want to – educate the common labourers too much, or they wouldn’t be content to do boring, repetitive tasks, nor to do what they were told to do without questions. So our current paradigm of training and education was never designed for learning; it was designed for sorting”. The whole educational system was based on specific instructions that controlled the learning process.
The reason for this change is simple. There is so much information out there, and technology has had such an impact on everything in society, including the classroom, that the school system cannot possibly create a specific instruction method for each newly created situation. Besides, one of the skills in highest demand is the ability to think independently and solve problems. This requires a teaching method that empowers the learner to have a say in the learning process.
The Four-Hour Body is not the most brilliant writing by any standard, but it sure is entertaining and captivating at times. Tim Ferriss will shake your assumptions at least once, if you have the patience to read the whole book. He put his body and his willpower to the test, and in the process he learned quite a bit through personal research and through talking to experts from around the world.
I am sceptical about many of his methods. He is young, and some of his experiments need time to show whether they hold up for bodies that don’t benefit from the regenerative powers of youth. His tendency to exaggerate is downright dangerous – do not try this at home. However, he has a point: you have to force issues to make progress, and you have to ignore the dominant beliefs that hold us stuck in an inconvenient position and dare to try something different. I like one of his quotes: “Motion is created by the destruction of balance” – Leonardo da Vinci.
I was about to decide I had read enough about dieting when my attention was captured by this chapter: Ultra-endurance: Going from 5K to 50K in 12 Weeks (there are other attention-grabbing chapters, such as 15-minute Female Orgasm, Sex-machine and Doubling Sperm Count). I read a few pages where he describes the painful training required to build the capacity to run 50K. Running 400m sprints and doing weight training are tough mental tests, because you need the willpower to smash your dislike of body pain in anticipation, before the massive pain occurs. This is the biggest barrier you must overcome before getting to see how capable you are. I recall the story of someone who visited a mountainous region in Mexico where villagers could run without running out of breath even at a very old age. At the beginning, trying to keep up with a local, he almost died running up a hill and down a valley. But then they told him he would need to do this for a few weeks and he would be alright. He did, and he could not believe how easy the running became and how fantastic the feeling of freedom was in his new physical shape.
The science of sports has been taken to dizzying heights. Around the world, scientific sport training centres prepare their athletes for the Olympics, pushing the performance barriers higher and higher. When you read The Four-Hour Body and go through all those scientific details, you are at times intimidated (or at least I was) by the depth of specialised knowledge accumulated over the years. It is quite amazing. It is striking how much we know about the chemistry, mechanics and biology of the body, robotics and genetics, with spill-over developments in prosthetics, health and performance management. All of it is well documented, tested and measured for everyone to see.
Why are similar efforts not made in other areas of human performance, such as learning, creativity or writing? Is it possible to improve writing performance in a way similar to Tim Ferriss’s method of training for a 50K run? Why not? Say, write in short bursts every day for 12 weeks and then write a book in one go. Has anyone done research on improving creative performance systematically, in a form the average person can use? There is no Four-Hour Mind book or equivalent out there.
The difference between sports and intellectual endeavours is that sport is a popular business that pays the winners handsomely. It is a huge social entertainment enterprise that has been with humanity since the beginning of time. Writing is a solitary journey in which only a few excel. The difference between sports and science is demonstrated by this: on Sky News you get a half-hour Sportsline four times a day, and no science news programs. This is a big blind spot in the way we set our priorities.
Why did Mark Twain say:
I have never let school interfere with my education
UnCollege is trying to offer an answer.
A recent report published in Australia reveals how the Living Lab and Interrupting Spaces methodologies have been used to research how young people use social networking services.
The argument is that with quantitative methods based on surveys and focus groups, the research could be tainted by an initial bias rooted in pre-existing assumptions. The Living Lab method (Leven & Holmstrom, 2008) and Interrupting Spaces (Bolzan and Gale) are more user-centric, and therefore the focus falls on how users actually operate.
After reading the report, I found it difficult to convince myself that these methods are more “precise” than the widely used quantitative methods. I can see arguments for both sides, but I tend to favour the more traditional methods until further proof is offered.
The key point here is that the design of the study is critical for both types of methodology. On one hand, the argument goes, surveys and focus groups tend to make assumptions about how users think and operate, thus influencing the type of questions that are asked. On the other hand, putting the user in the middle begs the question of who the representative user is. How would you know whom to invite to participate in the living lab? For that, one has to do a study and use… traditional research methods to establish a user profile for the recruitment process to refer to.
The Living Lab method doesn’t scale. Participation is limited, so the selection of users must be accurate and offer adequate representation.
In this particular study, Intergenerational Attitudes Towards Social Networking and Cybersafety, a group of young people and a small group of parents were invited to participate in an experiment to determine whether young people benefit from using social networking services (SNS). The study concludes that there are indeed positive benefits, and that the parents actually learned from the young people during the experiment.
As a parent, I did not recognise myself in any of the adults in the selected group. I also noticed that the experiment started with a preparation phase, which undoubtedly must have influenced the participants’ behaviour. I asked myself how the selection process worked. Was the socio-economic background of the parents considered, or their geographic and demographic profile? Does the pairing of young people and adults in the experiment reflect a real family structure?
The conclusions of the study mirror the assumptions used to design the setting of the Living Lab. Perhaps I need to look into a number of such research studies to get a better understanding of this type of methodology. While there is a valid argument that experiments organised in a natural context offer a more complete view of the subject of the research, the design needs careful consideration in order to avoid the undue influence of outcome anticipation.
The notion of sharing learning designs in schools is not new. It goes back many, many years and has been practised, in the form of sharing teaching experience, since the beginning of time. Thousands of years ago, in its simplest form, the transfer of a learning design as “design knowledge” applied to a certain context meant copying, mirroring others’ way of teaching students or trainees, meticulously keeping the instruction details intact.
One of the oldest practices took place in the military, where a learning design was shared by many instructors in an effort to discipline their soldiers. Closer to our times, generations of teachers learned the learning designs and applied them over and over, helping students acquire simple knowledge that barely changed over time. While the number of learning designs was quite small at the beginning of the industrial revolution, as the education system became more sophisticated, various disciplines started to form learning designs modelled on their own history and cultural background, differing more and more from one another.
It is only recently that we came up with the idea of the learning design as a template: a step-by-step, repeatable process that can be replicated and used as a tool to effectively support pedagogical practice. Why the need? Firstly, teachers have to deal with increasingly complex curriculum and performance requirements. Secondly, the body of knowledge has grown so much that something needs to be done to help teachers get through administrative work faster, so that they have more time to focus on interacting with students and performing higher-order activities.
This is where technology is both the culprit and the saviour. While it is widely recognised that learning designs implemented with technology are much easier to re-use, the actual sharing can be costly. There are technologies that offer improved solutions (LAMS International), but their patchy adoption prevents major productivity gains. Education systems need to adopt learning design systems across an entire organisation (meaning a region, state or country) to make real progress in reaping the potential rewards.