Tuesday, May 22, 2018

9 reasons why GDPR is a Kafkaesque mess….

I use the word ‘Kafkaesque’ deliberately: in Kafka’s novel The Castle, the unknown authorities engage K. in a never-ending, futile, bureaucratic process in which he never really knows why he is being asked to do anything, or by whom. So it is with GDPR. Organisations are sending out zillions of emails, just because they think someone told them to. Millions are receiving these emails, many unnecessary. Some may even be illegal, breaking the very rules they are meant to comply with. Ironically, GDPR phishing scams that spread malware or steal personal data are already hitting our inboxes.
GDPR is an EU regulation ((EU) 2016/679) and one of the worst regulations the EU Castle has ever come up with. In typical top-down, centralised fashion, shaped by lobbyists rather than common sense, the EU has managed to turn what should have been a practical, workable idea into a bureaucratic nightmare.
1. Ambitious
The New York Times interviewed a range of data experts and found that even they considered the regulations incomprehensible. It’s a massive tangle of badly worded regulation, completely over-engineered. The consequences for organisations of all sizes are horrific: more compulsory processes, more mandatory documentation, in some cases Data Protection Officers (DPOs) and, of course, a slew of useless courses. It is the blind leading the blind.
2. Ambiguous
Worse still, many experts warned that it was flawed. Its ambiguities are already being exposed. The badly written Eurospeak regulations are typically vague, written by people who have given little thought to implementation: “undue delay”, “likelihood of (high) risk to rights and freedoms” and “disproportionate effort” are just a few examples of the vagueness. This is a boon for lawyers, and the vagueness will play out in an ever-escalating Kafkaesque game fought for years through the European courts. Kafka’s The Trial will be the manual for this particular charade.
3. Myopic
Rather than working back from what is actually needed – user needs, actual structures and practical measures – they’ve gone for blanket fixes based on old assumptions. These are laws written by people who don’t really understand what data is, how it is stored or how it is used in leading-edge technology. They see data as being stored like furniture in a storage facility. They ask for clear specifications on use, insensitive to how data is used in machine learning and more contemporary forms of AI, where the outputs may not be clear in advance. We saw this gulf when Zuckerberg was interviewed by US Senators. This time, the gulf is written into bad law.
4. Massive hit for organisations
Organisations have to treat this as a ‘project’, using real staff to create milestones for oodles of documentation and process, incurring not only a large initial cost but ongoing costs too. Many people who wouldn’t know a database if it were in their soup will become Data Tsars. Many organisations will not have data management clauses in contracts with subcontractors – a big problem. Expect some wildfires here. This all takes real time and real money.
5. Small companies will suffer
The big boys will be fine. They have the resources to handle this hammer blow; small businesses do not. It will break many on the back of increased costs and fear of illegality. In a laughable concession, the EU decided to exempt small businesses from having to hire a Data Protection Officer – really!
6. Hits on revenue
One unexpected consequence is the hit on revenues for charities that may not get re-consent replies. This may apply to all sorts of businesses – an unforeseen consequence of an ill-defined regulation. The effects on revenues have, I suspect, been underestimated.
7. Users flooded
On the client side, users are receiving a ton of emails, most of which are being ignored – not because people are indifferent, but because they don’t have the time or inclination to respond. Rather than focussing on re-consent, the legislation would have been better formed if it had simply required organisations to inform existing users. Many organisations are being panicked into demanding fresh consent when it is not necessary.
8. Fines
Fines of up to €20,000,000 or 4% of total worldwide annual turnover, whichever is greater, are payable, yet it is not clear how lenient or harsh the regulators will be. Organisations are petrified and don’t really know how to quantify the risks. I can understand using this level of threat with the big boys, who will have the best of lawyers, but what about the little companies who will read this stuff and have to live with the risks? The truth is, they don’t actually know what it means or how to eliminate the risks.
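To make the arithmetic concrete, here’s a minimal sketch of that fine ceiling – the greater of €20m or 4% of worldwide annual turnover. The turnover figures are invented purely for illustration:

```python
def gdpr_fine_ceiling(annual_turnover_eur: float) -> float:
    """Maximum fine: the greater of EUR 20m or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Hypothetical turnovers: a giant faces a ceiling scaled to its size,
# but a small firm still faces the full EUR 20m cap.
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0
print(gdpr_fine_ceiling(2_000_000))      # 20000000.0
```

Note how the ceiling never falls below €20m – which is precisely why small companies feel the threat so disproportionately.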
9. Unforeseen consequences
It all comes into force on 25 May 2018. Of course, many are unprepared, many lack the resources to do what is demanded of them, and some will suffer – badly. The suffering will be extra costs, lost revenues, lost opportunities and possibly going under. It should never have been like this. I can also see small-scale data theft being used as a tactic to put competitors out of business, as the reporting rules are draconian. I can see companies losing revenues through consent failure by lazy users. I can see a lot of problems here.
Conclusion

Everyone agrees that we need some consumer protection. You need a visible opt-in box; if I unsubscribe, I want to know that you’ve done it; I don’t want you misusing my data. But that’s not what this ended up being. It’s ended up as a mess. Rather than KISS (Keep It Simple, Stupid) they’ve gone for KICK (Keep It Complicated and Kafkaesque). Kafka died before he could finish The Castle – and many will certainly lose the will to live, or be beaten into submission, as this stupid piece of regulation exhausts us with its bureaucratic blunt force.


Monday, May 21, 2018

Bullshit Jobs - how capitalism has replaced real jobs with BS jobs....

Bullshit Jobs by David Graeber, an LSE anthropologist and the man who gave us the phrase “the 99%”, surfaces something we’ve all come across. As jobs have shifted into services, no end of cosy jobs have been created for middle-class workers looking for an easy life. Graeber argues that these jobs are simply ends in themselves, and hangs several species of them out to dry. Few would disagree.
His book explores the Kafkaesque world of bureaucracy, with its ever-growing list of jobs for people who pretend to do things in an organisation that the organisation is not actually doing. Compliance checking and training are probably top of the list. Diversity training has been shown in large-scale studies to be ineffective, sometimes even counterproductive, yet tens of thousands are still employed in this fatuous activity. GDPR is yet another manifestation: organisations overreact and build structures, systems and jobs to deal with barely perceptible problems. Every problem needs a ‘course’ – a variation on ‘if you walk around with a hammer in your hand, everything starts to look like a nail’.
Graeber argues that, far from freeing us up for more leisure time, technology is being used to make us comply. HR has become the part of the organisation that protects the organisation from its own employees. We have armies of people telling us not what we should be doing but what we should NOT be doing. They now see employees as having pathological weaknesses – racism, sexism, unconscious bias and wellbeing problems to be sorted by hokey ‘courses’ and doses of Mindfulness, perhaps the only solution that actually delivers the very opposite of what it promises. This idea, that all employees are psychologically flawed and biased, has become the norm. Therapy culture has invaded HR, creating tons of jobs for people who are medically, and in any other sense of the word, unqualified to solve the imaginary problems they create. It’s a vicious circle.
Charities abound with more freelance ‘admin’ people, researchers and well-paid executives than clients. The Charity Commission has little handle on the cost-to-spend ratio of most charities. Without good governance, they quickly turn into job creation schemes for the CEO’s friends and acquaintances. I recently had to deal with Comic Relief. Two of their senior managers asked me in for a meeting, with a specific brief. I arrived only to find that both had forgotten the meeting had been arranged (and both had PAs!). They were apologetic, but they couldn’t manage their own lives, never mind a large business.
In academia, the amount of second- and third-rate research has rocketed, pressed into ever more journals that fewer and fewer people read. On top of this, layer upon layer of academic administration jobs has been created, making Higher Education increasingly expensive. The Case Against Education by the economist Bryan Caplan explores this very issue, with detailed research showing that funding more and more higher education is wasteful, as it is largely (around 80%, not wholly) ‘signalling’. Making young people do more and more degrees is simply credential inflation.

Having worked for over 35 years in business, I’ve learnt to sniff this out. When meetings have more than three people in them, there’s usually some BS work in the room. When an organisation gets bogged down in research and report writing (most quangos I know), you can read the BS that rolls off the press. The productivity puzzle is not really a puzzle; it’s clear for all to see. Technology does make us more productive – but not if it’s used to create tasks and jobs that do non-productive things.


Friday, May 18, 2018

A/B testing shows that Pavlovian gamification does not work

A/B testing
One of the benefits of the data revolution is that new data techniques can be used to give insights into what works, and what does not, in learning. A/B testing is one such technique. It is widely used in digital marketing and is something the world’s largest tech companies routinely use – Google, Facebook, Twitter, Amazon, Netflix and so on. You try two things, wait, measure the results and choose the winner. It only works when you have large numbers of users, and therefore data points, but it allows quick comparative testing and evaluation. We are now seeing it used in education, and one of the first results is surprising.
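For the statistically minded, here’s a minimal sketch of the standard two-proportion z-test that sits behind a simple A/B comparison. The pass rates and sample sizes below are invented purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

def ab_test(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: did variant B outperform variant A?
    Returns the z-score and two-sided p-value."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 5,000 learners per variant, variant A passes
# 52% of post-tests, variant B 55%.
z, p = ab_test(2600, 5000, 2750, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 3.0, p < 0.01: significant
```

With thousands of learners per variant, even a three-point difference in pass rate shows up clearly – which is why the technique needs large numbers of users.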
A/B testing on lesson plans
Benjamin Jones, at Northwestern University, wanted to know which lesson plans were more successful than others, so he randomly implemented different lesson plans in a series of A/B tests and waited for the results. His EDU STAR platform delivered the plans and harvested the results of short tests, to see which lesson plans got better results. One of his first A/B tests was on the teaching of fractions, using gamified versus non-gamified lesson plans. One group did a straight 'Dividing Fractions' lesson, the other a 'Basketball Dividing Fractions' lesson. This was an exciting experiment, as many thought that gamification was quite literally a game changer – a technique that could significantly raise the efficacy of teaching, especially in maths. So what happened?
Ooops!
In came the results. The gamified lesson plan fared worse than the non-gamified one. There are many possible reasons for this: the extra cognitive load required for the mechanics of the game, loss of focus on the actual learning, time wasted and so on. Interestingly, the kids spent more time in the gamified lesson (on average 4.5 minutes longer) but learnt less, suggesting that greater interest may be trumped by poorer deep processing and learning. But all we need to know at this point is that gamification fared badly when compared to more straightforward teaching methods. Interesting.
Primitively Pavlovian
There is a growing body of evidence pointing towards ‘gamification’ not being the pedagogic silver bullet that many imagine. The intuitive and popular appeal of computer games, combined with overactive marketing from some vendors, may be doing more harm than good. I have pointed towards negative results in previous articles, and suspect that the primitive, Pavlovian techniques commonly employed – leaderboards, rewards and badges – are of less use than more deeply structural techniques, such as levels, allowing for failure, and simulation. Unfortunately, the Pavlovian stuff is easier to implement. This is a complex area that requires unpacking, as ‘gamification’ is a broad term that includes many techniques.
A word on research…
A/B testing may be the one saviour here, in that educational techniques can be individually tested, quickly and cheaply. Traditional research takes ages and is costly: schools need to be contacted, students selected, administration completed – all of which takes time and money. Randomised online experiments, by contrast, can be quick and cheap. Online learning, in particular, has lots to gain. A/B testing can improve interface design and lower cognitive load, but it can also quickly identify efficacious interventions. Obama’s campaign team famously discovered, through A/B testing, that adding a simple ‘Learn More’ button increased sign-ups.
Bibliography
Stephens-Davidowitz, S. (2017) Everybody Lies, p. 276.

Jones, B. (2012) Harnessing Technology to Improve K-12 Education. Hamilton Project Discussion Paper.


Wednesday, May 09, 2018

Google just announced an AI bot that could change teaching & learning…. consequences are both exciting & terrifying…

Bot reversal
Revealed during Google’s I/O conference, it stole the show. Google stunned the audience with two telephone conversations with real businesses, initiated and completed by a bot built on Google Assistant technology. If anything, the real people in the businesses sounded more confused than the bot; indeed, it was hard to tell which speaker was real. Note that this reverses the usual set-up, where a person speaks to a bot – here, the bot speaks to real people. We are about to see a whole range of things done by humans in customer service replaced by bots.
Lessons in learning
This reversal is interesting in education and training, as it supports the idea of a bot as a tutor, teacher, trainer or mentor. I've already written about how bots can be used in learning. The learners remain real, but the teaching could be, to a degree, automated. Most of the time we communicate with each other through dialogue; it is how things get done in the real world, and it is also how many of us learn. Good teachers engage learners in dialogue. But suppose bots become so good that they can perform one half of this dialogue?
This is a tough call for software. There’s the speech recognition itself. The bot also has to sound natural, and natural is a bit messy. I can say ‘A meal for four, at four’ – that’s tricky. On top of this, we speak fast, pause, change direction and interrupt, yet we also expect fast responses. This is what Google has tackled head-on with neural networks and trained bots.
Domain specific
Google Duplex does not pretend to understand general conversations. It is domain-specific – which is why its first deployment will be customer service over the phone. You need to train it in a specific domain, like hairdressing or doctors’ appointments, then encapsulate lots of tricks to make it work. But in domain-specific areas, we can see how subject-specific teaching bots could do well. Bots on, say, maths, biology or language learning are sure to benefit from this tech. The tech is nowhere near ‘replacing’ teachers, but it can certainly augment, enhance – whatever you want to call it – the teacher’s role.
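To illustrate what ‘domain-specific’ means in practice, here is a toy, hypothetical sketch: a bot that only handles one narrow teaching domain (dividing fractions), matching a learner’s utterance against a handful of hand-written intents rather than attempting open conversation. The intents, phrasing and replies are all invented for illustration; real systems like Duplex use trained neural models, not keyword rules.

```python
# A toy, rule-based sketch of a domain-specific tutoring bot.
# Real systems use trained neural models; keyword matching just
# makes the 'narrow domain' idea concrete.
INTENTS = {
    "explain": (["how", "why", "explain"],
                "To divide fractions, invert the second fraction and multiply."),
    "example": (["example", "show"],
                "Example: 1/2 divided by 1/4 = 1/2 x 4/1 = 2."),
    "practice": (["practice", "quiz", "test"],
                 "Try this one: what is 2/3 divided by 1/6?"),
}

def reply(utterance: str) -> str:
    words = utterance.lower().split()
    for keywords, response in INTENTS.values():
        if any(k in words for k in keywords):
            return response
    # Outside its domain, the bot admits defeat rather than guessing.
    return "I only know about dividing fractions. Can you rephrase?"

print(reply("Can you explain how this works?"))
print(reply("Give me an example"))
print(reply("What is the capital of France?"))  # falls back gracefully
```

The point of the sketch is the trade-off: by giving up on general conversation, even a very simple bot can hold up its half of a narrow teaching dialogue.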
Conclusion

We’re not far off from bots like these being as common as automated check-outs and ATMs. I’ve been working on bots like these for some time, and we were quick to realise that this ‘reversal’ is exactly what ‘teaching’ bots needed. There are some real issues around their use, such as our right to know that it is a bot on the other end of the line, and their potential use in spam calls. But if it makes our lives easier and takes the pain away from dealing with doctors’ receptionists and call centres – that’s a win for me. If you’re interested in doing something ‘real’ with bots in corporate learning, contact me here….


Sunday, April 29, 2018

Amazon’s Alexa is about to get a lot smarter – could it help teach?

The folk at Amazon have a roadmap for Alexa that will take it to a new level. In the long term, this could have profound implications for learning. It’s based on these new features:
Better dialogue
Memory
Seamless skills
Personalisation
Better sustained dialogue
First up, you’ll be able to interact without first saying ‘Alexa…’. That’s great, as the current need to preface everything with the wake word turns you into a didactic monster, as if you were speaking to a small child or domestic slave. It will do this by carrying over the context, so that the dialogue can continue without having to repeat ‘Alexa’ – even dialogue at a later time. This carryover feature identifies context and provides replies related to that context, which matters in learning. It will know what you’re trying to learn, as well as how well you’re doing and what you are most likely to need next, and support you along the way. This will, eventually, be like having a teacher in your home.
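As a minimal sketch of what ‘carryover’ means, here is a hypothetical illustration: an assistant that keeps the current topic in a context store so that a follow-up utterance needs no wake word and no repetition. Everything here is invented for illustration, not Amazon’s implementation.

```python
# A toy sketch of dialogue 'carryover': the assistant keeps the last
# topic in a context store, so follow-ups can be resolved against it.
class LearningAssistant:
    def __init__(self):
        self.context = {}  # persists across turns, e.g. current topic

    def ask(self, utterance: str) -> str:
        if "fractions" in utterance.lower():
            self.context["topic"] = "fractions"
            return "OK, let's work on fractions."
        if "harder" in utterance.lower() and "topic" in self.context:
            # Follow-up resolved against the remembered context.
            return f"Here's a harder {self.context['topic']} question."
        return "What would you like to learn?"

bot = LearningAssistant()
print(bot.ask("Help me with fractions"))
print(bot.ask("Give me a harder one"))  # no need to repeat the topic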
Memory
These improvements in language recognition and generation, towards more natural dialogue, will be welcome, but they only pay off if Alexa can ‘remember’ what you both said earlier. Google Assistant already has this feature, albeit in quite primitive form. Note that these systems already store your shopping and ‘to do’ lists, and you can also ask Google to remember where you stored your keys and so on. This data can inform future learning conversations. Alexa will know what you have learnt previously and your level of competence, and can keep you in that learning zone: nothing too easy, nothing too difficult. As Alexa will ‘remember’ what you asked her, and use that information to inform future learning conversations, it truly becomes a teaching assistant. It then starts to have real teacher attributes.
Seamless skills
One problem with Alexa is the rather clumsy process of integrating new skills. This will become more seamless. Rather than you having to find skills, they will be streamed into the learning process. You may need specific tuition on a particular problem or skill; Alexa will provide that opportunity. You may need to know how to perform a specific experiment in science, a piece of grammar in language learning, practice on cube roots in maths, a poem to learn in English…
This matters in learning, as teaching is not some general skill but lots of different integrated skills. With a range of teaching skills – providing learning opportunities, learner engagement, learner support, adaptive learning, personalisation, practice and assessment – Alexa, or something similar, may well turn out to be a part-competent teacher at first, then gain in skills.
Personalisation
Just as Google, social media, Amazon’s online services and Netflix have sophisticated recommendation engines, so Alexa will get to know not only you but other learners, and all of that individual and aggregated data can be used to improve teaching and learning. Like Duolingo, it will not only know what you’ve learnt, it will know what you’re likely to have forgotten. It will also know the strengths and weaknesses of certain pedagogic approaches, and correct the weaknesses. In short, it will learn to be a better teacher by measuring success across millions of learners.
All of this offers very specific services across the teaching and learning journey:
1. Learning opportunities
A home assistant will be able to find, even suggest, new skills and learning opportunities. It may know that you are going to Italy, so offer some tuition in basic Italian. Everything from free courses and MOOCs to micro-skills could be on offer. This really could deliver the promise of lifelong learning – something that was never going to be delivered through institutions. If we are to pick up new competences throughout our lives, we need this type of learning to be available on demand, cheaply, in our homes.
2. Learning engagement
Learners are lazy. We all waited until the last moment to do our homework, write essays, complete assignments. Many of us fail simply by not doing things in a timely manner. Chatbots are already being used to engage students, push reminders, offer help, even offer help on wellbeing (see Woebot). Engagement can be personalised and nudge-like, improving the efficacy of learning and reducing dropout. I like it when I get alerts, messages, likes on Facebook, retweets, comments on my blog – that approach should be applied to learning. Engagement cannot be left to the intermittent, erratic and formal processes of institutions, term times, teacher availability and training courses.
3. Learning support
We’ve seen how the Georgia Tech bot ‘Jill Watson’ was an effective teaching assistant, as judged by learners, who put it up for a teaching award. ‘Differ’ is already being used in Nordic universities. Quick, polite, constructive help and feedback is what keeps learners going. A teaching assistant that is available 24/7/365 is precisely what is needed to combat the inefficiencies of current practice, where teaching is subject to the tyranny of limited teacher time. Learners need consistent help when ‘they’ need it, not just when the teacher is available.
4. Adaptive learning
When your own teaching assistant, in your own home, knows who you are – your age, the subjects you are taking at school or college, your job, the competences you need at work, your interests and lifelong learning needs – that will be splendid. It will adapt to your current needs and constantly be on hand to help you learn. Learning, in a sense, will become what it needs to be: invisible, simply part of your life. It will also be like a GPS system that knows when you’ve gone off course and gets you back on track.
5. Performance support
Most learning does not take place in schools, colleges and universities, but in the workplace, where it becomes more informal. You learn most of what you learn informally, not formally – from colleagues, from doing the work and from other sources, increasingly online. Imagine a service that simply delivers what you need, on demand, to solve the problem at hand. The invisible LMS may be on the horizon. Chatbots, such as Otto, are already on the market.
6. Practice
How do you get to Carnegie Hall? Practice, practice, practice… Learners need to make the effort to retrieve, apply, generate, elaborate and practise what they learn. This is so easily left to chance. But home technology could allow us to do this efficiently, as part of personalised learning. We know that ‘forgetting’ is endemic in teaching and learning; we forget more of what we’re taught than we ever retain. This technology can deliver, efficiently and personally, the deliberate and spaced practice that combats the forgetting curve.
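As a concrete illustration, here is a minimal sketch of a Leitner-style spaced-repetition scheduler – the simplest of the algorithms such a system might use. The box counts and intervals are arbitrary choices for illustration, not anything Amazon has announced:

```python
from datetime import date, timedelta

# Leitner-style spacing: each correct answer promotes an item to a box
# with a longer review interval; a mistake sends it back to box 0.
INTERVALS_DAYS = [1, 2, 4, 8, 16]  # arbitrary, illustrative intervals

def review(box, correct):
    """Return the item's new box and the date it is next due for review."""
    box = min(box + 1, len(INTERVALS_DAYS) - 1) if correct else 0
    return box, date.today() + timedelta(days=INTERVALS_DAYS[box])

# One flashcard over four review sessions: right, right, wrong, right.
box = 0
for answer in [True, True, False, True]:
    box, next_due = review(box, answer)
    print(f"box {box}, next review on {next_due}")
```

The widening intervals are the whole trick: each successful retrieval pushes the next review further out, which is exactly the pattern that counters the forgetting curve.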
7. Assessment
Formative and summative assessment have a lot to gain, from voice recognition (for identification) to practice and preparation for exams. Spaced practice and a scheduled approach to a learning journey can be planned and delivered by such systems. Online exams, especially oral exams, may also be delivered this way.
Will take time
This will all take time, as AI is nowhere near delivering many of these skills. There are real challenges here: improvements in speech recognition (accents, background noise, false triggering), recognition of meaning in the spoken word (there are many failures), dialogue management (not easy, as context is complex) and personalisation (data issues, relevance). But the promise is clear: some of these have already been mastered, and more are in the pipeline.
Trojan horse
It may not be Alexa, or Google Home, but chatbot assistants in the home are here to stay, and this technology is about to get a lot smarter. We’re getting a glimpse into a future where every home has a teacher. Home schooling will start to develop – at first with assistance for homework, then some active learning (especially languages), then other subjects. Parents pay a ton of cash for extra home tuition; could this eventually be available for free? Let’s suppose this is successful. Could such assistants become teachers, pushing engagement, delivering adaptive, personalised learning, sensitive to deliberate and spaced practice, with lots of retrieval, formative assessment and even exam practice? Imagine a future where you can go at your own pace in a subject, know with certainty that you’ve reached a certain level, self-assess, then simply sign up for a formal assessment. This is an interesting Trojan Horse.
Conclusion

This is the start of something interesting. I predicted some time back that Amazon may well be the company to create the ‘Netflix of learning’, and saw their progress in AI as a step in that direction. If this roadmap works, they will have a device that teaches, through dialogue, as if you had a teacher in every home. Imagine the impact in developing countries, given that it is cheap and can scale – scale globally.


Saturday, April 21, 2018

Lords report ‘AI in the UK: ready, willing and able?’ Let’s be honest – ready: no; willing: sort of; able: not really…

Politicians love a good report. The problem is, we produce them like pills, in the hope that they will make things better, when all they do is act as a placebo. It seems as though things are happening, but they ain’t. Whenever we are worried by something, in this case AI, we get a bunch of people, usually well past their sell-by date, to produce a ‘report’. To be fair, this is a substantial piece of work, at 420 numbered sections and 74 recommendations, but it’s all over the place, lacks focus and at times is way off the mark.
Ethics heavy
First, I’m not sure about a document that tries to climb and descend a mountain at the same time. No sooner has something been stated as a way forward than it’s drowned under a wave of repetitive moralising. Although they wisely stop short of blanket regulation, it’s full of pious statements about dangers, challenges and ethics. As Hume said, you can’t derive an ought from an is – and that’s exactly what they do, over and over again. It is hopelessly utopian in its assumptions – even that AI can be defined, never mind regulated. Perhaps too much is attributed to its efficacy and promise. In the end, it’s just software.
Crass identity politics
There’s the usual obsession with identity politics and the idea that bias in algorithms will be solved as follows: “The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds.” Oh dear – not that tired old idea. All this shows is that the writers of the report have succumbed to the diversity lobby, or suffer from a series of human biases themselves, starting with confirmation bias – the confirmation that diversity will solve mathematical and ethical problems. Bias is a complex set of problems in both human affairs and AI; it needs sharp analysis, not Woolworths pick-and-mix team building. There’s one really puzzling sentence on this that sums up their naivety perfectly: “The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning.” Put aside the fact that this is largely what the House of Lords does for a living; it is not even wrong. AI has 2,300 years of mathematics behind it – from the first identified algorithm in Euclid’s Elements, through centuries of theory in logic, probability, statistics and other areas of mathematics. AI is built on the past.
Exploiting AI
“The UK has an excellent track record of academic research in the field of artificial intelligence, but there is a long-standing issue with converting such research into commercially viable products.” Damn right. They are once again pained by the age-old UK problem of spending oodles of public money on world-class research that doesn’t translate into commercial success. There is the usual error of equating AI SMEs with university start-ups; in fact, many have nothing to do with universities. We need to support SMEs with business ideas. Yet where are the people like me, who put their own money and energy into starting an AI company and invest in others? Every AI academic in the land seems to have been consulted, along with many who wouldn’t know AI if they saw it in their soup. Our HE system is deeply anti-corporate, and to assume that research equals success is a complete non sequitur. We need to encourage innovation AND commerce around AI – not just hose yet more money into universities.
Usual suspects
Then there are the usual tired old suspects. First, a Global Summit. Really? Nothing like a junket to advance our AI capability. Then a code of conduct. Yet another one? Politicians do love codes of conduct. Then there is the predictable call for a quango – creatively named the AI Council. It’s all so unimaginative.
AI in education
But the worst section by far is the one on EDUCATION. There is a great deal of soul-searching about AI in education, but only in the sense of teachers and curricula about AI. The big win here is using AI to improve and accelerate teaching and learning. This is what happens when you only talk to teachers about AI: it’s all about the curriculum and nothing about actual practice. This is a massive, wasted opportunity. I’m selling an AI learning company to the US as I write this. We’re already losing ground. There’s something called the Hall-Pesenti review – whatever that is. I’ve worked in AI in learning for years, run an AI company (WildFire), have invested in AI in learning companies, speak all over the world on the topic, write constantly on the topic – yet I have no idea what this is. That’s the problem: Parliament is an echo chamber. They don’t really speak to the people who DO things.
Conclusion

To be fair, there’s some good stuff on healthcare, and a few shots across the bow on defence and autonomous weapons, but it’s a bit tired, pious and lacks punch. It will, of course, fall stillborn from the press.
