Featured Post

Hacking Health in Hamilton Ontario - Let's hear that pitch!

What compelled me to register for a weekend Health Hackathon? Anyway, I could soon be up to my ears in it. A PubMed search on Health Hack...

Saturday, February 11, 2017

Pushing Drugs - American Style: Watching the news makes people sick.

This is a post from the blog of professor emeritus Dr. Richard Hayes, who taught Buddhism and Sanskrit at McGill University and is now back home in the Four Corners area of the United States. Dayamati Hayes is also a Quaker, peace activist, vegan, and a conscientious objector to the Vietnam War. As a friend whom I have known on internet lists, and now on social media, for several decades, Dr. Hayes is well known and respected for his wit, wisdom, and insight into our human condition. There are too many excellent posts of Richard's to share them all; this one is just a sample, and one with some relevance to digital health:

http://dayamati.blogspot.ca/2017/02/pushing-drugs-american-style.html

Watching the news makes people sick

At the outset I must confess to being addicted to watching the news on television. Although my favorite televised news sources are on PBS, on most nights I supplement the PBS News Hour with the news on one of the traditional network stations or a cable news channel. Something that has repeatedly struck me in watching the evening news on traditional network stations is that advertisers have obviously learned that the vast majority of people who watch the evening news are suffering from indigestion, irritable bowel syndrome, erectile dysfunction, atrial fibrillation not caused by a heart-valve problem, moderate to severe psoriasis, rheumatoid arthritis, osteoporosis, depression, insomnia, restless leg syndrome or dry eye disease. If not afflicted by one of those conditions, they are being assaulted by meatballs or chicken wings.

Not all the commercials are pushing drugs, of course. Interspersed with all the pharmaceutical products are commercials featuring lawyers who are prepared to sue pharmaceutical companies for offering products that have life-changing side effects, and health insurance plans that complement Medicare to provide coverage to pay for all those pharmaceuticals that TV viewers are urged to ask their doctors about. Given the evidence of television commercials, remarkably few of the people who watch the televised news are under the age of sixty-five and have sound minds in sound bodies.
An often-heard claim of those who are convinced that the Patient Protection and Affordable Care Act has all but destroyed the health-care system in the United States is that the ACA (which they persist in calling Obamacare) has driven insurance premiums through the ceiling, thus bringing financial ruin to small businesses and confronting hard-working Americans with having to choose between health insurance and sending their children to overpriced universities. What is missed in this analysis, of course, is that health insurance is expensive because medical care and pharmaceuticals are expensive. Also left out of consideration is that almost every pharmaceutical product sold in the United States is available in Canada for a fraction of the cost.

Why don’t Canadians pay their share of the cost of drugs?

A claim I have heard many Americans make, clearly a claim that they have learned from the pharmaceutical companies themselves, is that the prices of pharmaceutical products are so high in the United States because it costs pharmaceutical companies a great deal of money to do the research necessary to develop new products. Some American friends have even shown indignation that Americans are subsidizing Canadians, who derive all the benefits of expensive medical research but pay none of the cost. Once, when I was still living in Canada, I received an email from a (former) friend in the United States who accused me, in language unsuitable for anyone not in either the navy or a motorcycle gang, of being a freeloader who was enjoying good health at the expense of poor Americans. That claim was false for two reasons. First, I have almost never been prescribed a pharmaceutical product and tend to avoid over-the-counter medical products. Second, there are better explanations for why pharmaceutical prices are outrageously high in the United States. So the answer to the question “Why don’t Canadians pay their share of the cost of drugs?” is that they in fact do pay their fair share. Americans pay more, not because they are subsidizing freeloading Canadians, but because Americans pay far more for products than it costs to develop and manufacture those products.

Why do Americans pay for overpriced pharmaceuticals?

The pharmaceutical companies typically claim that they must charge high prices for their products because of the high cost of developing them. It cannot be denied that running controlled tests on new products and making sure the products meet safety standards is costly. It should also be pointed out, however, that advertising the products once they are developed is also costly. To that can be added that pharmaceutical companies also tend to pay shareholders rather high dividends. When health care products are manufactured by for-profit corporations that have investors to reward with high dividends, then costs naturally rise. While the claim of many advocates of free-market capitalism is that competition keeps costs down, the opposite is often the case. If two companies are competing for a share of the market, the cost of the competition—the advertising of products to potential consumers of the products and to potential prescribers of those products—can be quite high.

Neither of those kinds of advertising is necessary. There is no justification whatsoever for running expensive advertisements on television that end with the line “Ask your doctor whether…is right for you.” There is no need to make the patient into a sales representative for a product that the patient may end up buying. If someone has, say, osteoporosis, then it should be sufficient for the physician to suggest a range of possible treatments, and to tell the patient the desired effects and the likely side effects of each of the possible treatments. And that information should be given directly to the physician in the form of the results of clinical trials, not in the form of slick presentations delivered in the context of work-vacations at expensive resorts. The cost of disseminating objective information is relatively low, whereas the cost of trying to persuade a physician to prescribe product A rather than the almost-identical product B is much higher.

One way to bring medical costs down is to make advertising of medical products illegal, as it is in some countries that have lower costs for pharmaceuticals and hands-on medical care. Another way is to have government-imposed limits on the amount of profit a company can make on a product, as is also the case in some countries that have reasonable consumer-costs for health-related products. A third way is to have a government-run insurance plan that negotiates prices with pharmaceutical companies and imposes a cap on how much a pharmaceutical company can receive for its products. There is no need for a government-run plan to be managed by the central government. In Canada each province has its own plan, and no two provinces have exactly the same setup.

Health care is far too important to be left to the vagaries of markets in a for-profit corporate scheme. The good health of the entire citizenry is far more important than the bank accounts of capitalist shareholders. There are plenty of other markets in which investors can make or lose their money. Pharmaceutical companies, manufacturers of medical devices, clinics, hospitals and retirement homes for the elderly should not be in the private investment sector of the economy. (Neither should correctional facilities, but that is a matter for another day.)

Americans desiring affordable health insurance should first advocate for more affordable treatments, and that is best achieved by a not-for-profit health-care system. They should be asking for, in fact demanding, more government involvement and less private-sector investment in products designed for health. Such a change in outlook would, however, require that Americans first seek a cure for their addiction to free-market capitalism and the delusion that the best way to keep costs down is to let the market determine prices. That strategy has been tried again and again, and it has failed again and again. It is time for Americans to consider an alternative system (not to be confused with “alternative facts”).

Next time you see a television commercial for an expensive treatment that you have seen a hundred times before, instead of simply reaching for the mute button on the remote control, ask your doctor whether socialized medicine is right for you. If your doctor says no, then consider seeking a second opinion.

Wednesday, February 8, 2017

COACH is recruiting health informatics students for Mackenzie Health Epic HIS



FOR IMMEDIATE RELEASE: COACH supports Mackenzie Health with large-scale digital health/health informatics undergraduate and post-graduate student recruitment initiative

Toronto, ON - February 8, 2017 - Today COACH: Canada's Health Informatics Association announced the roll-out of a major recruitment initiative with the goal of hiring more than 75 emerging professionals/students/HI graduates to fill full-time contract, co-op, and summer positions at Mackenzie Health, the regional health service provider for Ontario's Southwest York Region.

Mackenzie Health has embarked on a full implementation of the Epic hospital information system (HIS) as part of its drive to achieve Level 7 in the HIMSS EMRAM scale within three years. The hospital also has plans to open a second major site in 2020, and will become the first hospital in Canada to implement the full suite of Epic systems.

To facilitate adoption of the new HIS, the hospital will need 75 Super Users and 15 Credentialed Trainers to be drawn largely from COACH membership and academic contacts.

"We need to cast a wide net, and quickly," said Diane Salois-Swallow, chief information officer at Mackenzie Health. "The COACH membership community is simply the best place to find this many applicants with a basic understanding of HIS implementation complexities."

75 Super Users/15 Credentialed Trainers
Super Users will provide direct end-user support and assistance during implementation training sessions. Customer service skills and knowledge of the new system will ensure hospital users are comfortable during the go-live process. The Super User position is a paid position, and is defined as a placement or summer term opportunity starting May 1, 2017 and ending August 21, 2017. For more information about the Super User role, visit http://bit.ly/2kF4AWK.

Credentialed Trainers will train end-users (using existing training materials) and provide go-live support. Credentialed Trainers will be required to commit to a longer term, from March 30, 2017 until August 21, 2017. This is a paid position. For more information about the Credentialed Trainer role, visit http://bit.ly/2kiRy00.

"We are happy to be supporting Mackenzie Health in this important initiative," said Mark Casselman, COACH CEO. "This benefits everyone. The hospital benefits by being able to tap into a group of engaged, motivated young digital health professionals, trained for Canadian HIS delivery. And our Academic and Student Members will have the opportunity to put their education to practical use in a major health service delivery transformation. COACH is growing, and this is the first in a wave of new partnerships that will connect, inspire, and educate the digital health professionals who are contributing to the future of healthcare in Canada."
Applicants who require training in HIS delivery best-practices will participate in a COACH education session. Funding partners interested in reaching and investing in the next generation of the Canadian digital health workforce are welcome to participate in this education initiative.

About COACH
COACH: Canada's Health Informatics Association has a history of fostering professionalism and refining the expertise of its 2,000+ member population, with an emphasis on continuing education and shared knowledge. COACH is Canada's largest digital health community, representing professionals working to advance healthcare delivery through information technology. As the voice of Health Informatics (HI) in Canada, COACH promotes the adoption, practice and professionalism of HI. HI is at the intersection of clinical practice, Information Management/Information Technology and healthcare management. Visit www.coachorg.com for more information.

CONTACT
Mark Casselman at 416.358.0567 or ceo@coachorg.com

Friday, January 13, 2017

Brilliant Article by Susan Schneider - It may not feel like anything to be an alien

http://www.kurzweilai.net/it-may-not-feel-like-anything-to-be-an-alien

This was one of the most well-written and interesting articles I read all year. You don't necessarily need to have seen the movie Arrival to appreciate it:

It may not feel like anything to be an alien

December 23, 2016
An alien message in Arrival movie (credit: Paramount Pictures)
By Susan Schneider
Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.
The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.

I prefer the term “postbiological” to “artificial” because the contrast between biological and synthetic is not very sharp. Consider a biological mind that achieves superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. This creature would be postbiological, although perhaps many wouldn’t call it an “AI.” Or consider a computronium that is built out of purely biological materials, like the Cylon Raider in the reimagined Battlestar Galactica TV series.

The key point is that there is no reason to expect humans to be the highest form of intelligence there is. Our brains evolved for specific environments and are greatly constrained by chemistry and historical contingencies. But technology has opened up a vast design space, offering new materials and modes of operation, as well as new ways to explore that space at a rate much faster than traditional biological evolution. And I think we already see reasons why synthetic intelligence will outperform us.

An extraterrestrial AI could have goals that conflict with those of biological life
Silicon microchips already seem to be a better medium for information processing than groups of neurons. Neurons reach a peak speed of about 200 hertz, compared to gigahertz for the transistors in current microprocessors. Although the human brain is still far more intelligent than a computer, machines have almost unlimited room for improvement. It may not be long before they can be engineered to match or even exceed the intelligence of the human brain through reverse-engineering the brain and improving upon its algorithms, or through some combination of reverse engineering and judicious algorithms that aren’t based on the workings of the human brain.

In addition, an AI can be downloaded to multiple locations at once, is easily backed up and modified, and can survive under conditions that biological life has trouble with, including interstellar travel. Our measly brains are limited by cranial volume and metabolism; superintelligent AI, in stark contrast, could extend its reach across the Internet and even set up a Galaxy-wide computronium, utilizing all the matter within our galaxy to maximize computations. There is simply no contest. Superintelligent AI would be far more durable than us.

Suppose I am right. Suppose that intelligent life out there is postbiological. What should we make of this? Here, current debates over AI on Earth are telling. Two of the main points of contention—the so-called control problem and the nature of subjective experience—affect our understanding of what other alien civilizations may be like, and what they may do to us when we finally meet.

Ray Kurzweil takes an optimistic view of the postbiological phase of evolution, suggesting that humanity will merge with machines, reaching a magnificent technotopia. But Stephen Hawking, Bill Gates, Elon Musk, and others have expressed the concern that humans could lose control of superintelligent AI, as it can rewrite its own programming and outthink any control measures that we build in. This has been called the “control problem”—the problem of how we can control an AI that is both inscrutable and vastly intellectually superior to us.
“I’m sorry, Dave” — HAL in 2001: A Space Odyssey. If you think intelligent machines are dangerous, imagine what intelligent extraterrestrial machines could do. (credit: YouTube/Warner Bros.)

Superintelligent AI could be developed during a technological singularity, an abrupt transition when ever-more-rapid technological advances—especially an intelligence explosion—slip beyond our ability to predict or understand. But even if such an intelligence arises in less dramatic fashion, there may be no way for us to predict or control its goals. Even if we could decide on what moral principles to build into our machines, moral programming is difficult to specify in a foolproof way, and such programming could be rewritten by a superintelligence in any case. A clever machine could bypass safeguards, such as kill switches, and could potentially pose an existential threat to biological life. Millions of dollars are pouring into organizations devoted to AI safety. Some of the finest minds in computer science are working on this problem. They will hopefully create safe systems, but many worry that the control problem is insurmountable.

In light of this, contact with an alien intelligence may be even more dangerous than we think. Biological aliens might well be hostile, but an extraterrestrial AI could pose an even greater risk. It may have goals that conflict with those of biological life, have at its disposal vastly superior intellectual abilities, and be far more durable than biological life.

That argues for caution with so-called Active SETI, in which we do not just passively listen for signals from other civilizations, but deliberately advertise our presence. In the most famous example, in 1974 Frank Drake and Carl Sagan used the giant dish-telescope in Arecibo, Puerto Rico, to send a message to a star cluster. Advocates of Active SETI hold that, instead of just passively listening for signs of extraterrestrial intelligence, we should be using our most powerful radio transmitters, such as Arecibo, to send messages in the direction of the stars that are nearest to Earth.

Why would nonconscious machines have the same value we place on biological intelligence?
Such a program strikes me as reckless when one considers the control problem. Although a truly advanced civilization would likely have no interest in us, even one hostile civilization among millions could be catastrophic. Until we have reached the point at which we can be confident that superintelligent AI does not pose a threat to us, we should not call attention to ourselves. Advocates of Active SETI point out that our radar and radio signals are already detectable, but these signals are fairly weak and quickly blend with natural galactic noise. We would be playing with fire if we transmitted stronger signals that were intended to be heard.

The safest mindset is intellectual humility. Indeed, barring blaringly obvious scenarios in which alien ships hover over Earth, as in the recent film Arrival, I wonder if we could even recognize the technological markers of a truly advanced superintelligence. Some scientists project that superintelligent AIs could feed off black holes or create Dyson Spheres, megastructures that harness the energy of entire stars. But these are just speculations from the vantage point of our current technology; it’s simply the height of hubris to claim that we can foresee the computational abilities and energy needs of a civilization millions or even billions of years ahead of our own.

Some of the first superintelligent AIs could have cognitive systems that are roughly modeled after biological brains—the way, for instance, that deep learning systems are roughly modeled on the brain’s neural networks. Their computational structure might be comprehensible to us, at least in rough outlines. They may even retain goals that biological beings have, such as reproduction and survival.
But superintelligent AIs, being self-improving, could quickly morph into an unrecognizable form. Perhaps some will opt to retain cognitive features that are similar to those of the species they were originally modeled after, placing a design ceiling on their own cognitive architecture. Who knows? But without a ceiling, an alien superintelligence could quickly outpace our ability to make sense of its actions, or even look for it. Perhaps it would even blend in with natural features of the universe; perhaps it is in dark matter itself, as Caleb Scharf recently speculated.
The Arecibo message was broadcast into space a single time, for 3 minutes, in November 1974 (credit: SETI Institute)

An advocate of Active SETI will point out that this is precisely why we should send signals into space—let them find us, and let them design means of contact they judge to be tangible to an intellectually inferior species like us. While I agree this is a reason to consider Active SETI, the possibility of encountering a dangerous superintelligence outweighs it. For all we know, malicious superintelligences could infect planetary AI systems with viruses, and wise civilizations build cloaking devices. We humans may need to reach our own singularity before embarking upon Active SETI. Our own superintelligent AIs will be able to inform us of the prospects for galactic AI safety and how we would go about recognizing signs of superintelligence elsewhere in the universe. It takes one to know one.

It is natural to wonder whether all this means that humans should avoid developing sophisticated AI for space exploration; after all, recall the iconic HAL in 2001: A Space Odyssey. Considering a future ban on AI in space would be premature, I believe. By the time humanity is able to investigate the universe with its own AIs, we humans will likely have reached a tipping point. We will have either already lost control of AI—in which case space projects initiated by humans will not even happen—or achieved a firmer grip on AI safety. Time will tell.

Raw intelligence is not the only issue to worry about. Normally, we expect that if we encountered advanced alien intelligence, we would likely encounter creatures with very different biologies, but they would still have minds like ours in an important sense—there would be something it is like, from the inside, to be them. Consider that every moment of your waking life, and whenever you are dreaming, it feels like something to be you. When you see the warm hues of a sunrise, or smell the aroma of freshly baked bread, you are having conscious experience. Likewise, there is also something that it is like to be an alien—or so we commonly assume. That assumption needs to be questioned though. Would superintelligent AIs even have conscious experience and, if they did, could we tell? And how would their inner lives, or lack thereof, impact us?

The question of whether AIs have an inner life is key to how we value their existence. Consciousness is the philosophical cornerstone of our moral systems, being key to our judgment of whether someone or something is a self or person rather than a mere automaton. And conversely, whether they are conscious may also be key to how they value us. The value an AI places on us may well hinge on whether it has an inner life; using its own subjective experience as a springboard, it could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of other species, we value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from munching on an apple.

But how can beings with vast intellectual differences and that are made of different substrates recognize consciousness in each other? Philosophers on Earth have pondered whether consciousness is limited to biological phenomena. Superintelligent AI, should it ever wax philosophical, could similarly pose a “problem of biological consciousness” about us, asking whether we have the right stuff for experience.

Who knows what intellectual path a superintelligence would take to tell whether we are conscious. But for our part, how can we humans tell whether an AI is conscious? Unfortunately, this will be difficult. Right now, you can tell you are having experience, as it feels like something to be you. You are your own paradigm case of conscious experience. And you believe that other people and certain nonhuman animals are likely conscious, for they are neurophysiologically similar to you. But how are you supposed to tell whether something made of a different substrate can have experience?
Westworld (credit: HBO)
Consider, for instance, a silicon-based superintelligence. Although both silicon microchips and neural minicolumns process information, for all we now know they could differ molecularly in ways that impact consciousness. After all, we suspect that carbon is chemically more suitable to complex life than silicon is. If the chemical differences between silicon and carbon impact something as important as life itself, we should not rule out the possibility that the chemical differences also impact other key functions, such as whether silicon gives rise to consciousness.

The conditions required for consciousness are actively debated by AI researchers, neuroscientists, and philosophers of mind. Resolving them might require an empirical approach that is informed by philosophy—a means of determining, on a case-by-case basis, whether an information-processing system supports consciousness, and under what conditions.

Here’s a suggestion, a way we can at least enhance our understanding of whether silicon supports consciousness. Silicon-based brain chips are already under development as a treatment for various memory-related conditions, such as Alzheimer’s and post-traumatic stress disorder. If, at some point, chips are used in areas of the brain responsible for conscious functions, such as attention and working memory, we could begin to understand whether silicon is a substrate for consciousness. We might find that replacing a brain region with a chip causes a loss of certain experience, like the episodes that Oliver Sacks wrote about. Chip engineers could then try a different, non-neural, substrate, but they may eventually find that the only “chip” that works is one that is engineered from biological neurons. This procedure would serve as a means of determining whether artificial systems can be conscious, at least when they are placed in a larger system that we already believe is conscious.

Even if silicon can give rise to consciousness, it might do so only in very specific circumstances; the properties that give rise to sophisticated information processing (and which AI developers care about) may not be the same properties that yield consciousness. Consciousness may require consciousness engineering—a deliberate engineering effort to put consciousness in machines.

Here’s my worry. Who, on Earth or on distant planets, would aim to engineer consciousness into AI systems themselves? Indeed, when I think of existing AI programs on Earth, I can see certain reasons why AI engineers might actively avoid creating conscious machines.
Robots are currently being designed to take care of the elderly in Japan, clean up nuclear reactors, and fight our wars. Naturally, the question has arisen: Is it ethical to use robots for such tasks if they turn out to be conscious? How would that differ from breeding humans for these tasks? If I were an AI director at Google or Facebook, thinking of future projects, I wouldn’t want the ethical muddle of inadvertently designing a sentient system. Developing a system that turns out to be sentient could lead to accusations of robot slavery and other public-relations nightmares, and it could even lead to a ban on the use of AI technology in the very areas the AI was designed to be used in. A natural response to this is to seek architectures and substrates in which robots are not conscious.

Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing. The simple consideration of efficiency suggests, depressingly, that the most intelligent systems will not be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness.

If people suspect that AI isn’t conscious, they will likely view the suggestion that intelligence tends to become postbiological with dismay. And it heightens our existential worries. Why would nonconscious machines have the same value we place on biological intelligence, which is conscious?

Soon, humans will no longer be the measure of intelligence on Earth. And perhaps already, elsewhere in the cosmos, superintelligent AI, not biological life, has reached the highest intellectual plateaus. But perhaps biological life is distinctive in another significant respect—conscious experience. For all we know, sentient AI will require a deliberate engineering effort by a benevolent species, seeking to create machines that feel. Perhaps a benevolent species will see fit to create their own AI mindchildren. Or perhaps future humans will engage in some consciousness engineering, and send sentience to the stars.

SUSAN SCHNEIDER is an associate professor of philosophy and cognitive science at the University of Connecticut and an affiliated faculty member at the Institute for Advanced Study, the Center of Theological Inquiry, YHouse, and the Ethics and Technology Group at Yale’s Interdisciplinary Center of Bioethics. She has written several books, including Science Fiction and Philosophy: From Time Travel to Superintelligence. For more information, visit SchneiderWebsite.com.

Thursday, December 15, 2016

Anyone read the Clark report on eHealth Ontario?



Clark report recognizes eHealth Ontario – and ehealth in Ontario

The recently conducted Ed Clark review concludes that eHealth Ontario and its partners have created clear and compelling value for the health care system and recognizes the progress that’s been made.
In his report, Mr. Clark makes a number of recommendations to maximize the value of current assets, derive more value for the system and patients alike, and improve the delivery and oversight of the digitization of health information in the province.
While some of these recommendations apply solely to eHealth Ontario delivering its future mandate, many are aimed at the broader health care sector that is involved in digitizing health care across the province.


Wednesday, November 23, 2016

eHealth Medical Fiction - "Cell" by Robin Cook

I just finished a page-turner by Robin Cook called "Cell". I knew from the beginning that it was an eHealth type of medical fiction. It features a smartphone app called iDoc that promises to all but take over the role of the personal physician. While reading, I suspected the influence of Eric Topol, who must be one of the greatest champions of the medical smartphone revolution.

I was not too surprised to find that Robin Cook acknowledges Topol at the end of the book. For a while I was reading Topol's "The Patient Will See You Now" and "Cell" concurrently. Robin Cook wrote "Cell" in 2014 and credited Topol's "The Creative Destruction of Medicine". Reading the medical fiction is just a diversion. If you really want to learn about how the smartphone will revolutionize medicine - read Topol.

The author appears to argue in the book that Medicare should be a major department for all Americans, just like Education and Defense, as he alludes often to the Affordable Care Act and Obamacare. The villain is a health insurance company bent on making billions with the miracle app. The iDoc app is wonderful at first, as the algorithms on the smartphone help to prevent illness and manage conditions. Health advice is immediate and always accessible.

Unfortunately, the app takes a turn for the worse and the "heuristics" start killing off patients in the alpha testing. That involves what I think is the only science fiction element in the story - a nano-chip implanted in diabetic patients, remote-controlled by wireless radio signals, that releases doses of insulin. In real life the FDA has approved an "artificial pancreas" of sorts - a network of devices that automagically monitors and controls blood sugar levels - it just doesn't work on the nano scale.

Just saw over at the Geek Doctor blog that there is a guest post by Seth Berkowitz, MD about Apple's CareKit and ResearchKit frameworks and the HealthKit API being used at BIDMC. Engaging patients in their health like that is a step towards a kind of iDoc.
  


Sunday, November 13, 2016

Musing on the Interaxon Muse Meditation Headband

"For this calibration, find a comfortable position and take a deep breath".

The computer-brain interface world is getting interesting. The first time I heard about brainwave-sensing devices like the MUSE was an experiment in which people were trained to move a cursor on a computer screen using their brainwaves and an EEG headband. Maybe it was the MUSE - not sure. The next thing they did was have those same people change the colour of the floodlights on Niagara Falls and the CN Tower using their entrained brainwaves.

I have seen more than several research projects now that have used the Interaxon Muse headband - a device that guides users into a calm state of meditation by reading their brainwaves through an EEG headband and translating the data into a meditation tracking app. It may be just the start; soon EEG caps, gels, and wire attachments could be a thing of the past.

The McMaster University library recently started loaning out this device, so instead of buying one (about $400) I have borrowed one for a week. Mind you, I have 35 years of meditation experience in a variety of schools and techniques and am not expecting a device like this to teach me anything. But after taking an 8-week online mindfulness course - just videos and online instructions - I believe that meditation can be taught through technology.

After downloading the app and fumbling around trying to fit the headband on my head - I should have looked at the visuals in the instructions - I learned how to sync my brainwaves using the app on the iPad. I tried a 3-minute meditation in the living room while the TV was on, a laptop was playing a video in the background, and I was talking to my wife, who was doing her yoga exercises. My brainwaves during those 3 minutes were in the noisy/active category. I scored no calm points and heard zero "birds". Hearing birds means that your brainwaves are staying in a calm meditative space. Seeing a graph of my brainwaves is actually very interesting, but scoring points for meditating well and being asked if I want to share that on Facebook or Twitter is another thing. Tempting, though, to show all my friends on social media what a noisy mess my brainwaves are - No!
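Interaxon has not published exactly how calm points and birds are computed, but the general idea - band EEG power into calm/neutral/active states and reward sustained calm - can be sketched roughly. Everything below (the alpha/beta ratio, the thresholds, the five-epoch streak rule) is my own hypothetical assumption for illustration, not the real MUSE algorithm:

```python
# Hypothetical sketch of a Muse-style session score. Interaxon's real
# algorithm is proprietary; band names, thresholds, and scoring rules
# here are illustrative assumptions only.

def classify_epoch(alpha: float, beta: float) -> str:
    """Label one EEG epoch by the ratio of alpha (relaxed) to beta (busy) power."""
    ratio = alpha / beta if beta > 0 else float("inf")
    if ratio > 2.0:
        return "calm"
    if ratio > 1.0:
        return "neutral"
    return "active"

def score_session(epochs):
    """Tally calm/neutral/active epochs and award a 'bird' for each
    sustained stretch of calm (here: 5 consecutive calm epochs)."""
    counts = {"calm": 0, "neutral": 0, "active": 0}
    birds, streak = 0, 0
    for alpha, beta in epochs:
        state = classify_epoch(alpha, beta)
        counts[state] += 1
        if state == "calm":
            streak += 1
            if streak == 5:  # sustained calm -> one bird, restart streak
                birds += 1
                streak = 0
        else:
            streak = 0
    return counts, birds

# A noisy living-room session: mostly active epochs, a couple neutral, no birds.
noisy = [(0.8, 1.6)] * 10 + [(1.2, 1.0)] * 2
print(score_session(noisy))  # ({'calm': 0, 'neutral': 2, 'active': 10}, 0)
```

On this toy scheme, my living-room session would score all active/neutral and zero birds, while a quiet sit with consistently high alpha would rack up birds steadily.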

I was sort of impressed with the app interface and the instructions from the MUSE meditation guide. The next time I tried it, I sat in my meditation room on my meditation cushion and zabuton and extended the time to 7 minutes. I chose the default beach imagery with the sound of lapping waves and wind. If you hear the wind, it is actually the sound of your own brainwaves making noise; you are not watching your breath. I sat in the half-lotus posture with my hands in my lap, a classic meditation posture I have practiced for years. The resulting graph of my brainwaves after 7 minutes indicated that I had no active or noisy points - a 98% calm state of mind and about 100 birds. I could actually hear the birds in the background if I turned up the volume. Here is a picture of my stats. In my last 20-minute session the batteries in the MUSE drained and I had to resume twice, so the stats are all thrown off.

It is getting interesting, but I spent the rest of the day thinking that I had been under surveillance, with my brainwaves subjected to mechanical replication and analysis. This experience was not at all a natural process, in spite of the kind and soft voice of the human guide behind the algorithms in the app. My gurus had years and years of training and practice in meditation before they were allowed to teach. I didn't let that get to me, because I am fascinated with the technology.

For the next sitting session I tried 20 minutes - about the length of my usual meditation time these days. The result was 100% in the calm space, over 200 birds, and no "recoveries" - no straying outside the calm zone with distracted thought or a lapse of attention to mindfulness of breathing. And that was just a "normal" session for me.

I am really impressed with this device, but I am sure that I don't need it, having learned the art and science of meditation the traditional way - sitting at the feet of the masters, going on retreats, and practicing daily. My real question and concern is: how will this device work for digital natives and those new to meditation?

We live in a world of secular ethics, and this device does not come attached to any religious ideology. We all know by now that the practice of mindfulness of breathing cuts across the sectarian world. Creating calm brainwaves just requires the right guidance and intervention. Is total reliance on the MUSE soulless and alienating? Not necessarily, though I would probably recommend an online mindfulness meditation course called Palouse Mindfulness rather than the MUSE for a true beginner - especially one who is remote from teachers and centres and can't afford the cost. One of the practices in one of the major schools of Tibetan Buddhism is Lam Rim, which literally means "gradual path". The gradual path to meditative calm is the best way.


Here is one tip from my Zen teacher that will help anyone understand the nature of mind and meditation. Sitting across from me at a table, the teacher gave me a piece of paper and a pencil. He asked me to draw a small line each time I had a thought, to count them. It became obvious to me that the page would quickly fill up with counts of scattered thoughts. After sitting in meditation practice, the number of counts becomes noticeably fewer. Where did all those thoughts go? It is just a state of being.