A few days ago, LinkedIn (LI) sent me an email saying that “because you recently viewed” a particular position listed in their jobs section, “you might also like” a list of 25 others. I didn’t. But I do see this automatically generated mailing – however well intended – as part of a larger trend that makes me uncomfortable.
It is true that I casually looked at a position listing from an organization I know, but not one in an area where I have any particular expertise, nor one that especially interested me. I hadn’t thought I’d be tracked in this way, and wonder why LI’s job algorithm seized on this particular act of browsing, and how it came up with the eclectic list it did based on such slim information (though presumably it drew on something more from my profile and activity?).
This is not an isolated instance. I checked out some job sites not long ago, and based on my limited use of each I now receive batches of position listings by email, some loosely related and some clearly unrelated to what I happened to look at on each site. Since I’m not feeding those algorithms, they continue on separate tracks, churning out lists of positions in a way that is almost surreal (and following up on them, given the numbers involved, would be a significant time commitment). But each began with a particular interpretation of my clicks on its site.
It’s not so much the privacy issues I’m worried about here, as these are sites I chose to use and I’m benefiting from their products for free. It’s the assumptions, implications, and opportunity costs of a system developing in this way. There are evident limits to what these algorithms can currently do, though you can be sure that they will be improved on, and will eventually be sharing your information (much as a single internet search later shows up as ads on other websites you visit).
What really concerns me are two aspects of this use of technology:
- The idea that other people’s algorithms – about which the rest of us know little – are recording and analyzing our actions on the premise that they can (and perhaps are better placed to) make optimal choices for us. I see this as the latest reflection of a pattern in software development going back some years, in which the object has been to write programs that do more for users, anticipating our needs and interpreting our intent (based apparently on statistics and probability). There’s nothing necessarily nefarious in that, but I’ve found it annoying and sometimes frustrating (e.g., a litany of exasperating run-ins with Google searches that I won’t enumerate). More problematic is that it’s arguably a mindset or cultural bias in software development that in effect takes initiative from individuals even while ostensibly trying to benefit them.
- The possibility that development of algorithms and intelligent programs designed to do significant parts of our job search on our behalf will preclude development of intelligent programs that work at the behest of job seekers. It’s a path dependency issue, with the algorithms we see today reflecting the priorities in thinking and development being done for social networks like LinkedIn, job sites, and organizations doing hiring. The more these become embedded in the evolving job market, the more difficult it will be to explore alternatives which give more control to the diverse actors in it – especially to job seekers, who are otherwise just data to other people’s algorithms.
What alternative visions are there to centralized entities in the job market using algorithms and intelligent programs to analyze us as data, make recommendations, and ultimately make hiring decisions? How might we develop intelligent programs that work for the individual job seeker, ones that are not beholden to any professional network or job site?
This post was originally published on LinkedIn on 3 July 2017.