Grounds of discrimination


Specific ground of drug discrimination: property

Discrimination based on the type of property a person owns is contrary to the United Nations' Universal Declaration of Human Rights and the European Convention on Human Rights. Drugs are property; they may be possessed, traded or produced. Drug consumers, traders and producers are discriminated against according to the type of drug property they are involved with, not according to the harmfulness of that property.

General grounds for all forms of discrimination - as applied to drugs:

Drug discrimination was originally based on the ground of race, i.e. it began as a form of racism. Immigrants introduced unfamiliar non-traditional drugs and were then selectively blamed for their drug use. Eventually a minority of the indigenous population adopted these drugs. Finally, discrimination became grounded on unfamiliarity, which led to public fear, then to public unacceptability, then to illegal status, which in turn reinforced the unfamiliarity. This process maintains and reinforces the initial differences in drug prevalence between the majority and minorities.
These grounds are (a) subjective, based on perceived risk rather than objective evidence of risk, (b) selective, based only on the attitudes of the majority while ignoring the attitudes of minorities, and (c) indiscriminate, failing to distinguish risks that are voluntary from those that are involuntary.

Perceived risk/public fear and proportionality of regulations and penalties:

- drug regulations are proportionate to subjective perceived risk, NOT to evidence-based objective risks as required by principles of law. Witch-hunts, sexism and racism are similarly based on regulations and penalties proportionate to perceived risk rather than objective risk.

- the 'offender' is not culpable for Government/media/public fear - e.g. Sentencing Guidelines Council, Overarching Principles: Seriousness, 2004 [pdf, 62Kb]:

1.14 Some conduct is criminalised purely by reference to public feeling or social mores. In addition, public concern about the damage caused by some behaviour, both to individuals and to society as a whole, can influence public perception of the harm caused, for example, by the supply of prohibited drugs.


CONTENTS: Taking into account subjective perceived risk - Government guidance:

These quotes examine risk perception and indicate the connection between unfamiliarity, fear and unacceptability. They also indicate the difference between voluntary and involuntary risks and the need to consider the attitudes of minority stakeholders as well as the majority.

  1. Department of Health, Communicating about risks to public health 1999
  2. HM Treasury, Managing risks to the public - appraisal guidance, Appendix A: Concern Assessment Tool, 2005
  3. Government's account of how they use objective and subjective evidence in drugs policy making: Government reply to Parliamentary Science and Technology Committee’s Drug classification: making a hash of it? 2006


Department of Health, Communicating about risks to public health 1999, [pdf, 193Kb]

1.2 …public misperceptions exist: people may sometimes be most fearful of the "wrong" hazards. Even if the aim is to change such views, however, one needs to understand how they arise. More fundamentally, misperceptions do not affect only "the public". There is good evidence to suggest that everyone - public and "expert" alike - is fallible when thinking about risk, and fallible in some predictable directions....
2.1 Trust, Emotion and Openness
In most circumstances, messages are judged first and foremost not by content but by source: who is telling me this, and can I trust them? If the answer to the second question is "no", any message is liable to be disregarded, no matter how well-intentioned and well-delivered. Indeed there is even some evidence that well-presented arguments from distrusted sources actually have a negative effect - as if people conclude that the sender is not only untrustworthy but cunning....
Firstly, actions often do speak louder than words: judgements about trust will depend on what is done as well as what is said. This applies not only to the actions actually being taken to deal with a risk, but also to the manner adopted. Organisational "body language" is important: appearing to act only under pressure, for example, can be fatal.
Also important is emotional tone - again often conveyed both by words and actions.
Though trust is easily lost, building it is a long-term, cumulative process. Short of a reputation for infallibility, the single most important factor is probably openness. This involves not only making information available, but giving a candid account of the evidence underlying decisions. If there are genuine reasons for non-disclosure of data, the reasons need to be given both clearly and early on. The point is that there should be a presumption in favour of disclosure. There is also a need to consider the openness of the decision process. Can outsiders see how decisions are reached? Who gets to contribute, and at what stage? There is a reluctance to trust any system that has the appearance of a "closed shop". People need to know that their own concerns - and their own understandings of the risk in question - can be heard, and be taken seriously. Those who feel belittled, ignored or excluded are liable to react with hostility even to full disclosure of information.

2.2 Risk Perceptions: "Fright factors"

Box 1: Fright Factors:
Risks are generally more worrying (and less acceptable) if perceived:
1. to be involuntary (e.g. exposure to pollution) rather than voluntary (e.g. dangerous sports or smoking)
2. as inequitably distributed (some benefit while others suffer the consequences)
3. as inescapable by taking personal precautions.
4. to arise from an unfamiliar or novel source
5. to result from man-made, rather than natural sources
6. to cause hidden and irreversible damage, e.g. through onset of illness many years after exposure
7. to pose some particular danger to small children or pregnant women or more generally to future generations
8. to threaten a form of death (or illness/injury) arousing particular dread
9. to damage identifiable rather than anonymous victims
10. to be poorly understood by science
11. as subject to contradictory statements from responsible sources (or, even worse, from the same source).

It should be stressed that these refer to perceptions. What matters here is not (say) whether a risk is "really" involuntary, but whether it is seen that way.
Because perceived risk is multi-dimensional, numerical measures (e.g. annual mortality statistics) will never be all-important. The more positive point is that reactions are far from random. It certainly should not come as a surprise when risks scoring highly against Fright Factors provoke a strong public response. These are the classic ingredients for a "scare". Conversely, it will be difficult to direct attention to low-scoring risks.

2.3 Risk and Values

Though Fright Factors may highlight certain types of risk at the expense of others, there is no basis for dismissing them as unreasonable per se. They may indeed reflect fundamental value judgements. Deaths, for example, are not all the same: it would be perverse to insist that risk of a fatal cancer "should" carry no more dread than the prospect of a sudden heart attack. Similarly, willingness to accept voluntary risks may reflect the value placed on personal autonomy. In this context, risk often has positive attractions: mountaineers and racing drivers take precautions to limit the risks they take, but facing some risk is undeniably part of the fun. The general point is that responses to risk are dependent not only on context, but on personal values, political and moral beliefs, attitudes toward technology, and so on. All help determine which fright factors most frighten and what sources (and forms) of information are trusted.
Clearly, people's beliefs and values differ widely. ...This can depend strongly on approval or disapproval of the source of risk on other grounds. Those who love (or hate) the motor car anyway are usually more (or less) sanguine about its health risks.
....perceived benefits matter. Possible negative outcomes are, after all, usually only half of the equation. Benefits need not be material goods - "intangible" examples include convenience (open-platform buses are popular, despite the acknowledged risks). Indeed, there is evidence that people often respond to safety measures (e.g. more effective brakes on cars) by taking benefits that return their risk to its previous level (driving faster).
- Attitudes to risk depend critically on perceived benefits - or lack of them.
- Where risks are seen to be under personal control, reduction may not be wanted: people may prefer to take other benefits, or even welcome risk.
Rather than simply considering "the public" as a homogeneous mass, there is a need to consider the possible values held by key stakeholders or audiences.
Predicting individual responses to specific issues therefore remains elusive. The same can be true even of organisations: for example a consumer group might take a view stressing freedom of individual choice or the need for protective regulation.

2.5 Understanding Probability

Heuristics and Biases

Risk is essentially to do with chance - the likelihood of an unpleasant outcome. The accepted measure of likelihood is probability, and probabilities obey well-known mathematical laws. However the brain tends to manipulate them in ways that can ignore this logic. The problem is that simplified ways of managing information (or heuristics) serve well enough in most situations, but give misleading results in others. Left unchecked, they lead to various common biases in dealing with probabilities. Some of the most relevant are as follows.

  • Availability bias: events are perceived to be more frequent if examples are easily brought to mind, so - as already noted - memorable events seem more common.
  • Confirmation bias: once a view has been formed, new evidence is generally made to fit: contrary information is filtered out, ambiguous data interpreted as confirmation, and consistent information seen as "proof positive". One's own actions can also make expectations self-fulfilling.
  • Overconfidence: we think our predictions or estimates are more likely to be correct than they really are. This bias appears to affect almost all professions, scientific or otherwise, as well as the lay public. The few exceptions are those who - like weather forecasters - receive constant feedback on the accuracy of their predictions.

Both experts and public are prone to biases in assessing probabilities (e.g. availability, confirmation, overconfidence).

  • On getting new information, there is a tendency to forget about baseline probabilities.
  • Relative risks sound newsworthy, but can be seriously misleading if the baseline risk is not clear.
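The point about baseline risk can be made concrete with a small illustration (the numbers are hypothetical, not from the guidance):

```python
def absolute_increase(baseline: float, relative_risk: float) -> float:
    """Convert a relative-risk headline into the absolute change in risk."""
    return baseline * (relative_risk - 1)

# A headline of "risk doubled" (relative risk = 2.0) means very different
# things depending on the baseline risk it applies to:
rare = absolute_increase(1 / 1_000_000, 2.0)   # 1-in-a-million baseline
common = absolute_increase(1 / 100, 2.0)       # 1-in-100 baseline

print(f"rare hazard:   {rare * 1_000_000:.0f} extra case per million people")
print(f"common hazard: {common * 1_000_000:.0f} extra cases per million people")
```

The same relative risk translates into one extra case per million in the first instance and ten thousand in the second, which is why the guidance insists the baseline be made clear.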

Framing effects

Any situation involving risk can be "framed" in different ways, in terms of how the available information is mentally arranged. This can have a major - and often unrealised - effect on the conclusions reached.
A common example is that outcomes may be measured against different reference points - as with the bottle half-full or half-empty. The possible impact on decisions can be demonstrated experimentally by presenting the same choice in different ways. A typical study presented people with a hypothetical choice between two cancer therapies with different probabilities of success and failure. Half were told about the relative chances of dying while the rest had the same information presented in terms of survival rates. This more than doubled the numbers choosing one alternative. Neither way of looking at the problem is wrong, but the results are strikingly different. Perhaps most strikingly, the effect was just as great for physicians as for the general public. Similar studies show consistent results. One specific point is that - like with gamblers who bet more wildly to recoup losses - people tend to make riskier choices if all alternatives are framed in terms of possible losses, but "play safe" if choosing between alternative gains.
More generally, it is worth reiterating the point that people can approach risks from completely different frames of reference. If a regulator sets some allowable maximum level for a pollutant, is it protecting the public from a risk, or legitimising one?
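The reference-point effect described above can be sketched numerically; the 10% figure here is an illustrative assumption, not a figure from the study cited:

```python
# The same therapy outcome framed two ways.
death_rate = 0.10  # hypothetical: 10 of every 100 patients die

mortality_frame = f"{death_rate:.0%} of patients die"
survival_frame = f"{1 - death_rate:.0%} of patients survive"

# Both statements describe an identical probability distribution, yet
# experiments find they elicit markedly different treatment choices.
print(mortality_frame)
print(survival_frame)
```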

2.6 Scientific and Lay Perspectives

...there can also be differences in the assumed onus of proof for cause and effect. Most of the time - though not invariably - scientists will accept the existence of a causal link only once there is good evidence for it. Until then, links are "provisionally rejected". The lay view is much more likely to entertain a link that seems intuitively plausible, and reject it - if at all - only if there is strong evidence against. The difference is partly one of values: is it worse to accept an erroneous link or reject a real one? In risk communication, the issue cannot be resolved simply by insisting that scientific viewpoint is the correct one. In particular, public concerns are unlikely to be assuaged simply by reassurance couched in terms of there being "no evidence for" a causal link. To engage seriously with its alleged existence may require one, first, to acknowledge the face plausibility of a link, secondly to set out what evidence one would expect to find if it existed, and finally to show how serious investigation has not found such evidence. If the last two steps cannot both be provided, then a "no evidence" argument is anyway of little value. Even if they can, face plausibility often still beats absence of evidence.

2.7 Indirect Effects and the "Social Amplification" of risk

The mass media clearly play a major role. Reportage affects both perceptions of risk in general and how specific issues are initially framed. Then as an episode develops, reports of people's reactions to the original risk feed the indirect effects. However the mass media are not all-important. Professional (e.g. medical) networks are often also significant, as are informal networks of friends and acquaintances - the classic "grapevine". People typically trust the goodwill of family and friends more than any institutional source, while access to decentralised media such as the Internet increases the influence of self-organised networks. In any case, to blame "sensationalist reporting" for exaggerated fears is largely to miss the point. Media coverage may well amplify the public's interest in dramatic forms of mishap, but it does not create it. A "good story" is one in which public and media interest reinforce each other.
Of the triggers listed, there is some evidence that the single most important is blame, particularly in keeping a story running for a long period. However each case may be affected by many factors, including chance (e.g. a shortage or glut of competing stories). Once a "story" is established, the very fact that there is interest in the topic becomes a related story. The result can be that reportage "snowballs" as media compete for coverage.

Box 6: Media Triggers

A possible risk to public health is more likely to become a major story if the following are prominent or can readily be made to become so:

1. Questions of blame
2. Alleged secrets and attempted "cover-ups"
3. "Human interest" through identifiable heroes, villains, dupes, etc. (as well as victims)
4. Links with existing high-profile issues or personalities
5. Conflict
6. Signal value: the story as a portent of further ills ("What next?")
7. Many people exposed to the risk, even if at low levels ("It could be you!").
8. Strong visual impact (e.g. pictures of suffering)
9. Links to sex and/or crime


3.2 Aims and Stakeholders

A decision-based perspective should highlight one key question: what are we trying to achieve? Answering it requires a clear view of who the relevant stakeholders are - those both affected by the issue and having some possible effect on it. The very exercise of listing stakeholders and how they might react can be highly informative in clarifying one's own aims.
Politicians and the media are inevitably stakeholders in any major public health issue. So too are the public - though seldom as a homogeneous mass. Specific stakeholders might include different medical professions, charities and campaigning groups, various government departments and agencies, specific businesses, local authorities, and so on.

3.3 Contingency Planning and "Assumption Busting"

A common pattern of failure is of able decision-makers (and their advisers) becoming fixed on a particular set of assumptions. Uncertainties are assumed away and alternative views ignored - even in private. Lack of evidence for some effect may be translated into "proof" of its non-existence, or possible responses to risk simply dismissed as unthinkable. Problems may be defined in ways that simply omit significant public concerns. Caveats in relating laboratory to field conditions may get assumed away.
If an initial public position has to be modified, preparedness will limit the damage by allowing change to be both prompt and convincingly explained. There is therefore a need for determined "assumption-busting" early on. In the case of scientific assumptions, clues as to which assumptions to vary can be found by looking critically at the "pedigree" of key evidence - how it was generated and by whom. But sometimes even the highest-pedigree assumptions turn out to be mistaken, and there is often a need to look at non-orthodox views.
...there are strong arguments for at least acknowledging uncertainty in public. Doing so - it will be objected - risks flying in the face of demands for certainty from public, media and policy-makers alike. While not denying that there can be massive pressure for premature closure of debate, there is some evidence that the public is more tolerant of uncertainty honestly admitted than is often supposed. Indeed, a plethora of supposedly-certain statements may only fuel the cynical belief that anything can be "proven". The risks of appearing closed-minded can also be great. On balance, acknowledging uncertainty often carries fewer dangers, even if it is not a route to instant popularity.
Overcommitment to a given line often inhibits preparedness for any change, as well as making the change itself more dramatic.

HM Treasury, Managing risks to the public - appraisal guidance, Appendix A: Concern Assessment Tool, 2005 [pdf, 623Kb]

A1. This Appendix sets out a framework for understanding people's concerns in order that they can be considered in policy development and in the development of related consultation arrangements and communication strategies. A good understanding of relevant concerns is necessary for developing an effective risk management strategy although the effort expended should be proportionate to the risk in question. The information gained on relevant concerns should inform and assist the development and selection of policy options and the development of the associated communications strategy. For example, a public information programme can be implemented if it is discovered that public concern stems from a lack of understanding about the risk. Being responsive to public concerns and involving the public in decision-making, helps to improve the accountability and transparency of risk management.
A2. The framework is based on [a] psychometric model of risk perception ... in which characteristics of a risk are correlated with its acceptance. For example, risks that are undertaken voluntarily are generally considered more acceptable than risks that are imposed without consent. Similarly, risks that cause dreaded forms of harm are also considered to be less acceptable.
A3. The assessment framework is based around six risk characteristics that research suggests are indicators of public concern. These six indicators were chosen as being reasonably transparent, representative indicators of public concern which, from the available research, would correlate well with almost any other set that is likely to be proposed.
A4. Two of the characteristics relate to the nature of the hazard (Familiarity and Experience; and Understanding), two relate to the risk's consequences (Fear or Dread; and Equity and Benefits) and two relate to risk management (Control and Trust). Research indicates that each characteristic is correlated with concern so, for example, risks that are perceived to be highly uncontrollable would be expected to associate with a high level of concern. By collecting evidence about these indicators, the framework can help understand the likely nature and strength of concern and its drivers.
A5. Existing public perceptions of risk should be assessed as objectively as possible before policy solutions and communication strategies are designed. This requires an approach that is as open as possible in the early stages of engagement, to enable the public to express what they truly understand, and how and why they feel about a particular risk or set of risks. It is important in carrying out the communication that it addresses all relevant parts of the public to ensure that a representative cross section are reached.
A7. Each indicator should be scored on a 5-point scale by reviewing relevant evidence obtained from interviews, focus groups, review of media material, etc. For example, two elements to score the first indicator (Familiarity and Experience) are: How familiar are people with the hazard? [and] What is the extent of their experience?
A8. For each piece of evidence a number of bulleted questions act as prompts to explore related issues. For example, the first element under familiarity and experience - how familiar are people with the hazard? - has three further prompt questions: How familiar is the public with the hazard? Are all sections of society familiar, or is familiarity confined to specific groups? Are those exposed to risk familiar with it?
A9. These prompts are intended to give an indication of the range of issues that should be explored to collect enough relevant evidence to come to a decision on the extent of concern and not as literal questions to be asked (e.g. as a questionnaire). They are indicative and not prescriptive or exhaustive lists.
A12. Possible policy responses to each indicator should be entered into the scoring table. Suggested policy responses are discussed in Chapter 4, paragraphs 4.16 - 4.22. It is intended that the information on concerns should be used to inform but not constrain decisions on policy developments, options etc. and on consultation and communications strategies.
A25. The framework does not attempt to integrate or aggregate scores from the six indicators into an estimate of total concern because the categories are not wholly independent of each other. Moreover, the main strengths of the assessment framework are its ability to provide information on the nature of the concern and to understand how the views of different groups differ. Attempting to aggregate scores into a total will lose the information on the origins of those concerns and can mask differences of opinion between stakeholders. Care should also be taken to avoid double counting where one issue clearly drives scores under several, or all, categories.
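The structure of the framework (paragraphs A4, A7 and A25 above) can be sketched as a simple data model: six named indicators, each scored 1 to 5 per stakeholder group, with deliberately no aggregate total. This is an illustrative reading of the appendix, not an official implementation, and the example scores are hypothetical.

```python
# The six indicators named in paragraph A4 of the Treasury guidance.
INDICATORS = (
    "Familiarity and Experience",
    "Understanding",
    "Fear or Dread",
    "Equity and Benefits",
    "Control",
    "Trust",
)

def record_scores(group: str, scores: dict) -> dict:
    """Validate a stakeholder group's 1-5 scores against the six indicators.

    Deliberately returns the scores unaggregated: per paragraph A25,
    summing them into one 'total concern' figure would mask both the
    origins of concern and differences between stakeholder groups.
    """
    unknown = set(scores) - set(INDICATORS)
    if unknown:
        raise ValueError(f"unknown indicators: {unknown}")
    for indicator, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{indicator}: score {score} outside 1-5 scale")
    return {"group": group, "scores": dict(scores)}

# Hypothetical partial scorings for two stakeholder groups; keeping them
# separate preserves the differences of opinion the guidance warns
# aggregation would mask.
majority = record_scores("general public",
                         {"Familiarity and Experience": 2, "Fear or Dread": 4})
minority = record_scores("user community",
                         {"Familiarity and Experience": 5, "Fear or Dread": 1})
```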

Government's account of how they use objective and subjective evidence in drugs policy making: Government reply to Parliamentary Science and Technology Committee’s Drug classification: making a hash of it? [pdf, 1.23Mb]

Reply to recommendation 31:
“Decisions made by Government on classification matters rightly attract considerable interest and, in many cases, polarise views. The Government has made significant efforts to make very clear the reasons why it has classified or reclassified a drug, whether to Parliament or the public.
The drug classification system is not a simple measure of medical or social harms caused by drugs. Whilst these measures are at its very core and cannot be overstated, it represents a more complex assessment from a wide range of sources to ensure that any decision to classify or reclassify a drug is as unbiased and objective as possible.
In response to the Committee’s findings, the Government is pleased to set out the criteria it adopts when making classification decisions.
Decisions are based on 2 broad criteria – (1) scientific knowledge (medical, social scientific, economic, risk assessment) and (2) political and public knowledge (social values, political vision, historical precedent, cultural preference). Decisions must take account of scientific knowledge of medical harms, and social and economic evidence, as well as the insight provided by public consultation, and the knowledge and understanding provided by public bodies and Government departments”.

This is Government’s first ever statement concerning its decision-making process, and it is a great statement of how Government decision making should work:
(a) Government intends “to make very clear the reasons” for classification decisions;
(b) it clearly distinguishes objective factors, such as scientific knowledge, from subjective factors, such as political and public knowledge;
(c) it clearly states that objective factors, measures of “medical or social harms”, are at the classification system’s “very core and cannot be overstated”;
(d) it states that subjective factors must also be taken into account “to ensure that any decision …is as unbiased and objective as possible”.
It is not necessarily paradoxical in (d) for subjective factors to be taken into account to achieve greater objectivity. Taking into account objective evidence of subjective factors, like public opinion, is valid. Relying on subjective factors unsupported by objective evidence would be invalid. So Government could take into account objective evidence of public opinion like a properly conducted social science opinion poll but Government should not take into account assumptions about public opinion that lack objective evidence or reports in the media whose objectivity may be questionable. Any such poll would have to consider both the majority of the public and minorities.

5. Justification for unequal treatment of equally harmful legal and illegal drugs:

Reply to recommendation 50:
“The Government fully agrees that the drug classification system under the Misuse of Drugs Act is not a suitable mechanism for regulating legal substances such as alcohol and tobacco. The distinction between legal and illegal substances is not unequivocally based on pharmacology, economic or risk benefit analysis. It is also based in large part on historical and cultural precedents. A classification system that applies to legal as well as illegal substances would be unacceptable to the vast majority of people who use, for example alcohol, responsibly and would conflict with deeply embedded historical tradition and tolerance of consumption of a number of substances that alter mental functioning (ranging from caffeine to alcohol and tobacco). Legal substances are therefore regulated through other means. However the Government acknowledges that alcohol and tobacco account for more health problems and deaths than illicit drugs and this is why the Government intervenes in many ways to prevent, minimise and deal with the consequences of the harms caused by these substances through its dedicated Alcohol Harm Reduction Strategy and its smoking/tobacco programme. At the core of this work, which is given considerable resources, is a series of education and communication measures aimed at achieving long term change in attitudes. It is through this that the public continues to be informed in an effective and credible manner”.

This statement is the first time Government has presented any reasons for treating consumers and traders of equally harmful legal and illegal drugs differently.
Firstly they state that the distinction between legal and illegal drugs is not based on objective factors such as “pharmacology, economic or risk benefit analysis” since “Government acknowledges that alcohol and tobacco account for more health problems and deaths than illicit drugs”. Instead Government admits that the distinction is “based in large part” on the subjective factors of “historical and cultural precedents”. Government appears to accept the ACMD’s observation that “these distinctions are based on historical and cultural factors” but ignores their criticism that the distinctions “lack a consistent and objective basis”. Here Government admits that their decision to treat equally harmful legal and illegal drugs differently is “based in large part” on subjective factors, directly contradicting their assertion that objective factors are at the classification system’s “very core and cannot be overstated” (see 4. above). Is Government taking into account these subjective factors “to ensure that any decision …is as unbiased and objective as possible”? The reverse appears to be true. The phrase is merely a description of cultural discrimination, not a justification; both sexism and racism are examples of distinctions that are based on “historical and cultural precedents”. The reply to Recommendation 31 also mentioned “cultural preference” as a subjective factor yet it seems certain that this factor is taken into account with bias, referring only to the preferences of the majority and not to the preferences of minorities who prefer different drugs to the majority.
The second reason Government gives for treating legal drugs differently from illegal drugs is that treating legal drugs like illegal drugs would be unacceptable to the public because it would involve prohibiting responsible drug use. Government does NOT address the option of treating illegal drugs like legal drugs. This is a clear indication that Government recognises that the distinction between responsible, reasonably safe drug consumption or trade and irresponsible, unreasonably harmful drug consumption or trade is a justifiable distinction. But is it true that legal drugs can be used responsibly but illegal drugs cannot? The 2002 Home Affairs Committee report The Government’s Drug Policy: is it working? stated:
“20. While around four million people use illicit drugs each year, most of those people do not appear to experience harm from their drug use, nor do they cause harm to others as a result of their habit”.
Indeed the unacceptability of prohibition of responsible drug use is the very reason why 10% of citizens do not comply with the blanket prohibition of illegal drugs. To be credible, drug laws must accurately reflect the relative harms of different patterns of drug use, as they do with legal drugs. Currently the Government’s implementation of the Misuse of Drugs Act fails to discriminate justifiably between those who consume or trade drugs reasonably safely and responsibly and those who consume or trade drugs unreasonably harmfully and irresponsibly. The Act itself makes clear it is only concerned with drug misuse “capable of having harmful effects sufficient to constitute a social problem” and NOT with drug misuse that does not “constitute a social problem”.