Tuesday 30 October 2018

Changes of life expectancy


Hodder Valley, Lancashire, UK.
In a recent Post I indicated that the maximum life-span of a human being is about 105 years, a life longer than this being very exceptional. However a shorter life-span is common due to individuals encountering premature life-ending events such as major injury or disease. If these are not encountered, then life will ultimately come to an end because of “old age”. Death from old age is the result of frailty or a lack of physiological reserve, effectively a combination of mitochondrial failure and an exhaustion of stem cells.

Average life expectancy has been increasing in past decades, during which more and more people have been approaching the maximum life-span, but most have not quite got there. We can see, for example, a steady increase in the number of people in the UK living beyond their 90th birthday. There is no sign yet of this slowing down.

Figure 1. 90 year olds in the UK

Up to about 1870, average life expectancy in Europe and America was about 40 years, and this was due to a large number of deaths in childhood. One of my great grandfathers, Richard Alston (whom I never met), was the youngest of twelve children born on a farm in the Hodder Valley, very close to where I live now (I can see the farm-house from my bedroom window). Of the twelve children, only four survived to be adults. Richard moved to Manchester, where he married and had ten children, just four surviving to be adults.

Such stories were usual but nevertheless tragic. In 1840 in the UK, average life expectancy from birth was only about 42 years. But if a child managed to survive to the age of 10 in 1840, then the average life expectancy would be about 57 years. This is shown in Figure 2.
Figure 2. Life expectancy by age

We can see in Figure 2 increases in life expectancy within the 20th century, especially life expectancy from birth. The major improvement predated the antibiotic era, which started after 1945. It was almost certainly the result of civil engineering changes: new housing with wider streets, sanitation and waste disposal, piped clean water, more food, domestic heating with delivered coal, and improved schooling. This has been a world-wide phenomenon, as shown in Figure 3.

Figure 3. Changing life expectancy throughout the world

There would not have been many 70 year olds in 1840 (born 1770) and their life expectancy beyond the age of 70 was on average only about two years. The life expectancy of 70 year olds changed little during the century 1840 to 1940 (Figure 2). 

However, there was an improvement from 1980 onwards. This was the first time that the life expectancy of 70 year olds increased, and it of course resulted in the great increase in those living beyond their 90th birthday (Figure 1). It was the result of the end of the epidemic of high-mortality CHD (coronary heart disease) and a major decline of deaths from this cause.

Although many people died from CHD in middle age (or younger), most deaths occurred in people beyond the age of 70 years. It is therefore this age-group that has seen an unprecedented increase in life expectancy.

As well as major improvements in infant mortality rates in industrialised countries worldwide, the latter half of the 20th century also saw a dramatic worldwide reduction of maternal deaths. In the 21st century this has also happened in the non-industrial world, for example Ethiopia. This is shown in Figure 4.


Figure 4. Changing maternal mortality.

Since 1980 there has been a steady increase in life expectancy, shown here for the countries making up the UK. This is seen in both males and females, as in Figures 5 and 6.

Figure 5. Life expectancy in UK nations (Males)

Figure 6. Life expectancy in UK nations (Females)




This dramatic recent improvement in life expectancy has been mainly the result of the end of the epidemic of CHD. The improvement could not be expected to continue indefinitely, and there has been a levelling-off of the increase during the past couple of years. This is expected, and a new steady state is inevitable. But there appears to be a marginal decrease in life expectancy in Scotland, and perhaps Northern Ireland, in 2017.

This is a cause for concern and it requires explanation. It is obviously the result of premature deaths, that is, deaths below the 2010 average life expectancy of 81 years. It is not clear which diseases have been responsible for this.

There is one disturbing fact: There has been an increase in lung cancer deaths among people who have never smoked, and 80% of these are women, often quite young. Deaths from lung cancer in women who have never smoked now exceed deaths from breast cancer and ovarian cancer combined. The reason for this is unknown.

If we look back at Figure 3 we can see that the increase in average life expectancy in African countries was interrupted by a reduction between 1990 and 2000. This would be explained by the epidemic of AIDS-related deaths occurring at that time. There does not appear to be an equivalent cause for the apparent reduction of life expectancy in Scotland and Northern Ireland, and perhaps it is just an aberration that will have disappeared by the next annual report.

The end of the epidemic of CHD (coronary heart disease) has had a major beneficial effect on average life expectancy in the UK.  If we encounter another epidemic, then the average life expectancy will obviously diminish.

Otherwise the average life expectancy should maintain a steady state. This will change if one of two things happens. First, if there is a rapid and significant reduction of deaths from another important cause of premature death (as has been the case with CHD deaths), then average age at death will increase. What diseases are likely to diminish?

If on the other hand there develops a sudden increase in deaths from a specific disease, an epidemic of something new, then average age at death will diminish.

It is unlikely that deaths from “all causes” will either diminish or increase to any significant degree. It is the change in death rates from specific diseases that is much more likely to be significant.

Hodder Valley, Lancashire, UK.


Friday 21 September 2018

Decline of coronary heart disease in three generations

Coronary heart disease in three generations



It might be assumed that coronary heart disease (CHD) has always been with us, that it is with us today, and that it will be with us always. But during the past century CHD has changed both in quantity and in quality.

Early 20th century

It appears that CHD was very rare during the first two decades of the 20th century, and it was a rare cause of death. Things changed in the mid-1920s.

This was described very clearly by Dr Maurice Campbell, a leading UK cardiologist during the 1960s. During the early part of the 20th century there was a gradual reduction of number of deaths from disease of the heart valves, which were due to syphilitic and rheumatic heart diseases. Details can be seen in a previous Post: “The onset of the epidemic of Coronary Heart Disease”.

While deaths from diseases of the heart valves were reducing, the total number of deaths from heart disease showed an increase. Something new was happening, either a new disease was emerging or there was an unexplained major increase of a form of heart disease that was previously uncommon and unimportant. 


Figure 1. The emergence of coronary heart (artery) disease
This is clear in Figure 1. The blue graph-line indicates all heart deaths: there is an increase after about 1923, and the increase is persistent. At the same time, deaths from valvular heart disease (yellow graph-line, VHD) are declining. The red graph-line shows the emergence of coronary heart disease, here labelled CAD (coronary artery disease, an alternative term).

The emergence of coronary heart disease

The increase of heart disease deaths continued exponentially to a peak in about 1970, and thereafter fell rapidly. It became clear that the “new” heart disease was coronary heart disease, CHD.


Figure 2. The epidemic of CHD in the UK

This recognition was based on autopsy studies, and it is this, the pathology, that defines the disease. We can argue about a diagnosis based on clinical features: are there changes in nomenclature, or is there more case-finding? For example, the lay term “heart attack” is used very loosely and cannot be the basis of a specific disease. Myocardial infarction (MI) is the specific medical term, based on clinical features together with autopsy findings, and in this we find certainty.

The pathology of coronary heart disease 

The pathology identified the disease particularly in the walls of the coronary arteries of the heart. There was an inflammatory basis to the disease that was occurring in what are called “plaques”. Autopsy following typical death from myocardial infarction identified the plaques rupturing through the lining of the coronary arteries into the lumen. This process is called plaque rupture, very much like an erupting boil. It could initiate a blood clot, causing complete occlusion of the coronary artery, with sudden death or the clinical features of myocardial infarction. This was something new.

Figure 3. Illustration of plaque rupture and thrombosis
Although CHD deaths were most common in late middle age, they could also occur in young adults. It became clear that the pathological process developed at an early age, identified in autopsies performed on young adults who had died as the result of injury.

The epidemic/pandemic of coronary heart disease

The epidemic nature of the new CHD in the UK is obvious in retrospect, but most people seem to be unaware of it. A mild form of CHD continues, mainly in the elderly, but the death rate has declined considerably.

Is the decline a result of preventative medical interventions, and in particular the widespread use of statins? The answer is clearly “No”. Despite what we are told, statins have only a minor effect, reducing ten year mortality by just 1%. And statins were only widely used after the year 2000. 

Autopsy studies in young men

Important pathological investigations have demonstrated that the epidemic of CHD has been in a dramatic and spontaneous decline, and this is not due to medical interventions. This information has been based on the autopsies performed on young US soldiers killed in military action. It has been possible to compare three generations in three episodes of warfare: The Korean war (1951–1953), the Vietnam war (1968–1978), and the Iraq and Afghanistan wars (2000–2011).

Medical services were very advanced during these wars, based on experience gained during the two world wars earlier in the 20th century. Autopsies were performed regularly and this allowed detection of diseases and abnormalities incidental to the injuries causing death.

In the second half of the 20th century, coronary heart disease was recognised as an increasing public health problem and major cause of death. The experience in the Korean war led to the confirmation that onset of the disease was in early adult life, or even in late childhood.

Korean war, 1951–1953

Autopsies performed during the Korean war identified the presence of coronary heart disease in almost 80% of the soldiers killed in action, and severe disease in almost 20%. 

Figure 4. Evidence of CHD in autopsies of soldiers killed in action

This shows the very serious nature of the epidemic of CHD as it was developing in the 1950s. The soldiers would be in the age range 18–28 years, and this was the recognition that the origins of CHD were early in life. They would probably have acquired the disease in the 1930s. Had they not been killed in action, they would no doubt have died from CHD about 30 years later, between 1970 and 1980.

Vietnam war, 1968–1978

The next major US war was in Vietnam, between 1968 and 1978. The soldiers would be of the next generation, born in the 1950s with parents born in the 1920s. Once again deaths in conflict allowed the assessment of incidental CHD, and it was about half of what was found in the Korean war. There was evidence of CHD in 46% of autopsies, and of severe disease in 8%.


Figure 5. Evidence of CHD in autopsies of soldiers killed in action

This was a remarkable reduction in the prevalence of CHD, but it seemed to provoke no curiosity at the time. The reduction cannot be explained on the basis of medical intervention or change in diet.

Iraq and Afghan wars, 2000–2011

Then came the wars in Iraq and Afghanistan, after an interval of thirty years, another generation. The prevalence of incidental CHD had on this occasion fallen much more dramatically. There was evidence of CHD in fewer than 10% of autopsies, and severe disease in only 2%.


Figure 6. Evidence of CHD in autopsies of soldiers killed in action

Changing prevalence of CHD

The very high prevalence of CHD in autopsies of young men in the 1950s was frightening, and it was of course a predictor of the peak of the epidemic of CHD deaths that came 20 years later.

The dramatic decline of evidence of CHD in young men cannot be explained in conventional dietary terms. It was clearly a spontaneous decline of a serious disease of pandemic proportions that had caused many deaths.


This study of autopsies performed on soldiers killed in action is the most objective and uncontroversial evidence of the decline of CHD.

Such autopsy evidence did not emerge from the two earlier world wars. At that time, in the first half of the 20th century, CHD was not a significant public health problem, and routine autopsy services in conflict would not have been possible.



Due to a micro-organism?

I have explained in previous Posts that the only plausible explanation for the cause of the pandemic of CHD was a novel micro-organism. Its rapid decline would be due to the development of immunity, acquired initially by the first generation. Antibodies would have developed even though the disease might have been fatal. This acquired immunity would be transmitted to the next generations through natural inheritance, amplified by subsequent generations if the disease were still active.

This is exactly what we see in these wartime studies: the disease declined rapidly within three successive generations.


Source of data:

Dalen JE et al. American Journal of Medicine, Volume 127, Issue 9, pages 807–812.




Wednesday 22 August 2018

We cannot live for ever – we are programmed to die

The limitations of the duration of life



"We have a maximum lifespan set by our own evolutionary history, which ultimately depends on the complexity of synaptic connections in our brains, and the size of stem cell populations in other tissues."

Nick Lane - "The Vital Question"


(Stem cells are cells of embryonic origin. They are found in most organs and tissues of the body, and when necessary they can differentiate into mature functioning specialised cells. They can also divide to produce more of the same type of stem cells.)


Average life expectancy has increased during the past thirty years and this has led to extravagant claims as to how long the human being might live in the future. 

Japan has been viewed as having many very old people, but it appears that there is fraud. Investigation has demonstrated that most of these very old people are in fact dead. Their deaths had not been officially declared, so that their relatives have been able to continue to receive their financial benefits.

The longest documented human life was that of Jeanne Calment, whose death in France was certified in 1997 at the age of 122 years.


Jeanne Calment
Italy is known to have a relatively large number of very elderly persons and research there has shown that life expectancy tends to plateau at about 105 years. Life beyond 105 is very exceptional.

We read in the Book of Genesis that many of the patriarchs lived exceptionally long lives, but we have no way of knowing whether a year then was the same as a year now.



My recent Post indicated that among UK doctors, 50% lived beyond the age of 87 years. This is remarkable. It is the median measure, which is more useful than the arithmetic mean, as the mean is influenced by exceptional extremes.

It can be seen clearly in Figure 1, which displays the ages at death of a sample of 568 UK doctors dying in 2016 and 2017. Each vertical column represents one person dying, and the height of the column indicates the age at death. Obviously as we are dealing with doctors only, childhood deaths are not shown, but we know that today childhood deaths are fortunately rare in the UK. 


Figure 1. Ages at death of UK doctors, 2016–17

We can see the vertical 50% line, the middle of the range, and this is the median of age at death, 87 years. What is happening at present is that this median age at death is increasing, more people surviving beyond their 87th birthday. In a short time the median age at death might be 90 years. 
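The point about the median being more useful than the arithmetic mean is easy to demonstrate with a few lines of Python. The ages below are hypothetical, chosen only for illustration; they are not the survey data behind Figure 1.

```python
# Hypothetical ages at death (not the actual survey data): a single
# exceptional early death drags the arithmetic mean down markedly
# but leaves the median almost untouched.
from statistics import mean, median

ages = [87, 88, 85, 90, 86, 89, 84, 91]
ages_with_outlier = ages + [35]  # one exceptional premature death

print(mean(ages), median(ages))              # 87.5 87.5
print(round(mean(ages_with_outlier), 1))     # 81.7 (mean pulled down)
print(median(ages_with_outlier))             # 87 (median barely moves)
```

A single death in childhood, or a single centenarian, distorts the mean far more than the median, which is why the median age at death is the better summary.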

Increasing average life expectancy

Remember that someone dying now at the age of 87 years was born in 1931, a time of much greater austerity than now, an era before modern medicine and before antibiotics. The next generation would have been born in about 1961, a very different and more prosperous era. They are now about 57 years of age, and when they die in the middle of the present century, the median age at death might be 97 years. Imagine the social impact of 50% of adults living beyond their 97th birthday. But this is not far-fetched. 

So more and more people will live to see their 105th birthday, but not beyond. This is a much more likely scenario than the maximum possible age at death increasing. It is shown in Figure 2. The blue columns illustrate the ages at death today of  hypothetical individuals, and the green columns illustrate the likely ages at death in thirty years time, the deaths of the next generation. People are living longer and more will achieve the maximum age of 105 years. It will be exceptional to live beyond this and maximum length of life will not be significantly exceeded.


Figure 2. Hypothetical age at death of two generations
If overall survival of the population is increasing, it is because deaths from something have come to an end, and we have previously seen the main reason for so many people now surviving into very old age. It is the end of the epidemic (pandemic) of coronary heart disease (CHD). This killed millions of people in middle age or early old age during the second half of the 20th century. Now that the epidemic is at an end, many more people are living into very old age. We can see in Figure 3 the rapid and continuing increase in the number of those reaching their 100th birthdays.

The peak of deaths from CHD was in about 1970, and the rapid decline of these deaths is responsible for the rapid increase in very old people that we see in Figure 3. This mirrors the decline of deaths from CHD.


Figure 3. Number of centenarians in the UK
The slow-down of life expectancy

There has been recent publicity in the UK to the effect that there is a slow-down of the increase in average life expectancy that we have witnessed since 1970. 

".... from 2006 to 2011 life expectancy at birth for females in the UK rose by 12.9 weeks per year, but between 2011 and 2016 the increase dropped to 1.2 weeks per year." (Guardian 08-08-2018).  In males the corresponding figures are 17.3 weeks and 4.2 weeks. "[This] has been observed in all countries in Europe, North America, and Australia". [Data from the UK National Office of Statistics]
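Converted into total years gained over each five-year period, these rates make the slow-down easier to grasp. This is a simple back-of-envelope calculation from the figures quoted above, nothing more:

```python
# Back-of-envelope conversion of the quoted rates: total life
# expectancy gained over a five-year period, expressed in years.
def years_gained(weeks_per_year, n_years=5):
    return weeks_per_year * n_years / 52  # 52 weeks in a year

# Females, 2006-2011 vs 2011-2016
print(round(years_gained(12.9), 2))  # 1.24 years gained
print(round(years_gained(1.2), 2))   # 0.12 years gained

# Males, same periods
print(round(years_gained(17.3), 2))  # 1.66 years gained
print(round(years_gained(4.2), 2))   # 0.4 years gained
```

In other words, females gained well over a year of life expectancy in the first period but only about six weeks in the second.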

This is not a "bad thing", and it is nothing to do with claims that it is the result of government policy or austerity. We are not now living for a shorter time, but a longer time. The increase in life expectancy since 1970 has been the result of the end of the epidemic of CHD, but the dramatic effect of this can last for only a few decades. We are now entering the end of that effect, with an inevitable slowing down of the increase of average life expectancy. As people will continue to die before the age of 105 because of disease, it is likely that we will move into a new steady state – until the next pandemic arrives.

We must think about the major diseases that in the early 21st century cause premature death, and the most obvious are cancers. When the survival from cancers improves dramatically (or the incidence drops dramatically) we will see another increase in overall survival. As will be described below, there is at present nothing that can be done about strictly age-related causes of death.

Programmed death

Each animal species appears to have a specific limit of duration of life, that is programmed death. This is specific to a species and obviously indicates a genetic control over the duration of life. We will look at this shortly. There is however an important variation among individuals.

Another factor to consider is that death in old age does not come suddenly. There is a progressive deterioration during most of our lives, a deterioration of physical performance and also a deterioration of what is called "physiological reserve". This means that recovery from a given illness is prolonged, and in old age a minor illness can lead rapidly to the end of life.

We can appreciate the development of frailty, a reduction of exercise capacity and the visible appearance of ageing in the skin in particular. With increasing age there is a greater susceptibility to infections and of more importance a greater mortality or a longer recovery time.

A simple way of envisaging life and death is as follows. Let us accept that about 105 is the age beyond which human life will not normally  exist. It is as though at birth we have 105 units of "life force". We lose one unit each year and when they are all gone, we die, we "conk out" like a car that has run out of fuel.
Figure 4. Simple view of programmed death
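The simple model can be written down directly. This is my own sketch, and the units are the illustrative "life force" units described above, not a physiological measurement:

```python
# Simple "life force" model: 105 units at birth, one unit lost per
# year, and we "conk out" when none remain. Purely illustrative.
def life_force(age):
    """Remaining units of life force at a given age."""
    return max(0, 105 - age)

for age in (0, 20, 70, 105):
    print(age, life_force(age))  # 105, 85, 35 and 0 units remaining
```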
This is perhaps not strictly true as at birth our body systems have not developed their full power. This would occur at about the age of 20 years, when we have maximum strength and fitness, unless of course disease occurs. In Figure 5 we can see the increase in physiological capacity to the age of about 20 years. This maximum is that of the 20 year-old fully grown individual, but it is possible that physiological capacity per kilogram of body weight is greater at a younger age.

By the age of 70 years there will be significant frailty with physiological capacity, "life force" at about only 30% of maximum.


Figure 5. Adjusted life-long pattern of "life force".

Illness modifies this profile, and there is wide individual variation, especially later in life. Figure 6 illustrates an individual who maintains greater physiological activity into later life (60 units remaining at age 70) compared to Figure 5 (30 units remaining at age 70).
Figure 6. Good vitality in old age.

Disease diminishes our natural life force, sometimes permanently, shown in Figure 7. In this case chronic illness develops early in life. There would be a maximum life force of only 80 units, with considerable disability and early death at or before the age of 60 years.
Figure 7. Life profile of life-long chronic illness.
There can be complete recovery from serious illness early in life, when there is considerable physiological reserve. This is clear from Figure 8.
Figure 8. Serious illness early in life, with full recovery.

In Figure 8 there is complete recovery from what has been a life-threatening illness (temporarily reducing "life force" to only 30 units), but ultimately there is full life expectancy.


Figure 9. Serious acute illness later in life with incomplete recovery.
An acute illness later in life (Figure 9) might result in a very close encounter with death, but with incomplete recovery and reduced life expectancy.

Towards the end of life a relatively minor illness can be and often is fatal, whereas in earlier life there would be recovery. This is shown in Figure 10.


Figure 10. End of life acute event in the elderly.
This is seen frequently. An elderly person appears to be "perfectly healthy" and active. But then something happens – an illness such as influenza or another infection, a fall, a bone fracture followed by surgery. The family cannot understand the rapid deterioration, but they will be unaware of the considerable reduction of physiological reserve. For the doctor a dilemma arises in certifying the cause of death: is it really the fall? Or is it "old age", meaning the exhaustion of physiological reserve? In the absence of illness or injury, death will inevitably result from "old age", and enabling people to reach that point is arguably one of the main aims of medicine and public health.

The compression and decompression of morbidity

The late 20th century saw what has been called "the compression of morbidity". 


The compression of morbidity was characterised by the life pattern changing from that shown in Figure 7 (which could represent tuberculosis or a chronic respiratory disease such as bronchiectasis) to Figure 6.  

Rather than death being preceded by several years of frailty, it came to pass that life came to an abrupt end without any frailty. This is shown in Figure 11.


Figure 11. Sudden death and the compression of morbidity.
During the 20th century sudden death in adults became common, and it could occur at any age. Death, mainly of men between the ages of 18 and 30 years, became common during the two world wars. Sudden death related to childbirth had a similar effect (Figure 12), but fortunately became less common during the 20th century and is now very rare.


Figure 12. Sudden premature death in an individual due to war or childbirth 

Such young men and women did not have the good fortune to live long enough to experience the frailty of old age. 

The compression of morbidity that was characteristic of the second half of the 20th century was the result of the pandemic of coronary heart disease / myocardial infarction. Figure 11 illustrates in particular the sudden death in healthy middle age that was so common  particularly around 1970, the peak of the pandemic. 

We can easily see that the "compression" of morbidity was brought about by sudden death in middle age. It was hardly a success of medicine but the result of events outside medical control. It is illustrated in Figure 13.


Figure 13. The compression of morbidity due to early sudden death.
The blue graph line shows the life pattern of a typical individual who at the age of 60 years was superficially healthy with a maximum life expectancy of a further 55 years. However sudden death occurred as the result of myocardial infarction, a common occurrence during the pandemic of CHD, by far the most common cause of death in 1970 and up to the turn of the century. 

This was robbery of expected life, but although robbed of living, the individual was also deprived of, or perhaps saved from, the progressive deterioration that would give rise to the typical frailty and morbidity of old age. This is illustrated by the green graph line, and the shaded area indicates the compression of morbidity.

Today, with a major reduction of deaths from CHD resulting from the end of the pandemic, life is expected to follow the green graph line, and morbidity will be decompressed. Many more people are now experiencing the morbidity of old age.

Now, in the early 21st century we see major health, social, and political pressures resulting from the rapid increase in the number of very elderly people. The green shaded area shown in Figure 13 is now coming to life. 

We are now experiencing the "decompression of morbidity" and the serious public health pressure resulting from it.

Mitochondria and programmed death

The frailty of old age is difficult to measure and there are efforts to find a simple surrogate measurement, such as grip strength, walking, or balance. Generally these attempts measure muscle strength as this most obviously reduces with age, even in the absence of disease.

Then there is "physiological reserve", the innate ability to heal quickly, to repair tissues, and to restore function. Observation leads us to understand that children recover after an acute illness or injury much faster than would their parents and grandparents.

The basis of this is energy, energy to power muscles, to enable healing, and energy to drive metabolism and brain function. Energy is derived from the controlled breakdown of glucose, and to a lesser extent of fatty acids (especially in heart muscle). I do not wish to describe the biochemistry of energy production, but it is important to understand where the process takes place.

The proposal made in 1970 by Dr Lynn Margulis is that one of the major steps in evolution (more than 2 billion years ago) was the chance incorporation, by endosymbiosis, of a specific bacterium into a host cell of a different bacterium or archaeon. This previously free-living bacterium multiplied within the new host cell, and its descendants ultimately became the intracellular mitochondria. This might have happened only once in evolution, but it gave the cell a huge boost in energy production, conferring an enormous evolutionary advantage. The potential for reproduction was enhanced, and also that for metabolic development.

The mitochondria are the site of biochemical energy production, the breakdown of glucose via the Krebs cycle. Carbon dioxide and water are the metabolic end-products, and energy is released. It is captured as adenosine triphosphate (ATP) and used for metabolic purposes.


Mitochondria
Each cell contains 20,000–40,000 mitochondria, and each time the cell divides the mitochondria must replicate. This is where the process of ageing is thought to lie, and with it programmed death. 

Mitochondria have a very small genome, as during evolution most of the controlling mitochondrial DNA (1500 genes) migrated to the cell nucleus. When the cell divides, the mitochondria are instructed to replicate. But replication is not always complete, and very gradually the number of mitochondria in each cell diminishes.

In addition, during cell division mitochondrial mutations can occur, more frequently with increasing age. Mutations will result in reduced mitochondrial function. Serious mutations result in a non-functioning cell, which in many (but not all) tissues will be replaced from stem cells.

In all cells mitochondria need constant replacement, and this is achieved by mitochondrial replication within the cell even when the cell is not dividing. Once again, with ageing this process deteriorates, with reduction of mitochondrial numbers and function. In tissues without stem cells, the deteriorating cells cannot be replaced.
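The idea that incomplete replication gradually erodes mitochondrial numbers can be caricatured in a toy simulation. This is entirely my own illustration: the starting count, failure rate, and number of divisions are arbitrary, chosen only to show the direction of the drift.

```python
import random

random.seed(1)  # make this toy run reproducible

def divide(mito_count, failure_rate=0.02):
    """Mitochondria successfully passed on at one cell division:
    each one independently fails to replicate with a small probability."""
    return sum(1 for _ in range(mito_count) if random.random() > failure_rate)

count = 2000  # arbitrary starting complement of mitochondria
for _ in range(50):  # fifty successive divisions
    count = divide(count)

print(count)  # substantially fewer than the starting 2000
```

Even a 2% failure rate per division leaves roughly a third of the original complement after fifty divisions, which is the gradual diminution described above.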

As a result of these mitochondrial changes, energy levels reduce, and this is the mechanism behind the gradual reduction of what above I have called "life force". The full "life force" during early life is a full complement of fully functioning mitochondria, inherited from the cytoplasm of the ovum, and derived entirely from the mother.

Mitochondrial function will therefore decline in different organs at different rates, leading to variation in age-related organ failure. The decline is particularly important in cells with a high energy requirement, those with the highest metabolic activity.

Skeletal muscle, heart muscle, brain and retina (including optic nerve) have the highest metabolic activity, the retina the highest of all. It is in these cells that the reduction in mitochondrial performance will have the most serious consequences in the form of age-related disabilities. Recovery by cell replacement is not possible because of the absence of stem cells in these tissues.

Some organs, the liver in particular, have many stem cells to replace failing cells. The power of recovery of the liver after serious damage is remarkable, and age-related liver failure does not occur. Other organs do not have this advantage.

The brain is most obvious: each neuron ("brain cell") develops about 10,000 connections. If the neuron dies (by apoptosis, the removal of metabolically failing cells), these connections are lost for ever, together with life experiences and personality that are written into them.

And so in the absence of disease (including Alzheimer's disease) we ultimately experience age-related muscle weakness, heart failure, brain failure including memory loss, and macular degeneration with visual failure and blindness.

The point will ultimately be reached when mitochondrial replication and function will fall to a level that will be incompatible with life. Programmed death will then be reached.

We will not live for ever. We are programmed to die.



For further reading on the subject of the role of mitochondria in programmed death I would suggest:

Nick Lane - "The Vital Question"



This book is outstandingly good, but inevitably technical and far from an easy read. I have referred to it in a previous Post:


The origins of life - rock and water









Friday 27 July 2018

The UK NHS – 70 years but why was it necessary?


The UK is celebrating the introduction of the National Health Service, 70 years ago. It is however a time for reflection rather than celebration. The "celebrations" usually describe advances in medical practice that have occurred in many nations, advances that are far from specific to the UK NHS.

Why is it that in 1948 the UK introduced a nationalised health care system (known as the NHS), when this had not happened in other countries and has not happened subsequently outside the UK?

The NHS was not introduced out of ideology but out of necessity. The many independent hospitals were effectively bankrupt, and rescue by central government was the only acceptable option at the time. It proved to be very successful.


Blackburn Royal Infirmary, opened 1862

The voluntary hospitals of the UK, the infirmaries, had developed widely during the 19th century in response to an increasing number of industrial accidents. They were primarily centres of surgery, but staffed mainly by physicians; surgeons had a technical function without day-to-day patient care. 

The local authorities, the town councils, also developed hospitals, but these were more orientated to chronic diseases, and also tuberculosis, acute fevers, care of children, and maternity care. There was generally rivalry rather than co-ordination between the local authority hospitals and the infirmaries.


The local authority hospitals were funded by the town council, but there was no formal or regular funding of the infirmaries. They had been set up by large donations from local industrialists, and funding continued to be on the basis of philanthropy. Donations from citizens were constantly required.

Bismarck

During the late 19th century hospital activity was increasing substantially but philanthropy was diminishing. The infirmaries were drawing on their financial reserves. A new and regular source of funding was necessary, and the answer lay in Germany, governed by Chancellor Otto von Bismarck.

Bismarck had introduced social insurance, an insurance system in which each week working people paid a compulsory income-related tax into a fund, so that at times of need (illness, unemployment, major injury, old age) they would receive payment from the fund. This was an important social advance, very relevant to the new industrial nations.

Lloyd George

Social insurance was introduced in the UK in 1911, by the Liberal party Chancellor of the Exchequer David Lloyd George. It included general practice family medicine, but not hospital care. This had long-term ramifications that we experience in the 21st century.

General practice family medicine adapted well to social insurance, but the doctors were paid on the basis of an annual capitation fee, rather than payment per item of service, which had developed in other European countries. 

The infirmaries were desperate for the introduction of social insurance. It would mean that, as in the German model, an episode of hospital care would generate a bill, which would be passed from the patient to the insurance organisation that collected the social insurance fund. The vital part of the process was that an increase in activity would generate increasing income for the infirmary.

This is the model that proved to be very successful in other European countries. It was not introduced in the USA due to failure to pass legislation for a health tax that would be compulsory – the Statue of Liberty represents liberty from government laws.

But in the UK hospital funding from the 1911 social insurance scheme was not to be. The proposal proved unacceptable to the hospital consultant body of the British Medical Association (BMA). The consultants felt that a cash payment from patient to doctor was right and proper, and they appear to have given no consideration to the plight of the infirmaries.

Clement Attlee
And so the infirmaries continued after World War One with serious underfunding that was not sustainable. The crisis of the 1930s was postponed by the greater challenge of World War Two. However, within government, rescue of the nation's infirmaries (including the major teaching hospitals) was being planned. The proposals were accepted by the new Labour government in 1945 (Prime Minister Clement Attlee) and the NHS was introduced on July 5th 1948. The key politician was the charismatic Welsh Minister of Health, Aneurin Bevan.

Aneurin Bevan

The option of local government taking over the infirmaries was against the wishes of the BMA, and in any case it would not have helped, as a national funding system was essential, along the lines of the German social insurance. But in the UK there was no insurance organisation that could deliver the enormous task ahead, and a solution was urgent. The government stepped in to fund hospital care, and indeed all health care, from central taxation. The expenditure of the hospitals in 1947 was provided as their income in 1948 and beyond.

The opportunity was taken to integrate all hospitals within a community under a Hospital Management Board. Hospital care was free at the point of access.

The immediate crisis was solved, but activity increased at a greater rate, no doubt because care was now free to all, and this has continued throughout the subsequent 70 years. The financial pressures were effectively transferred to central government, which had taken responsibility for funding health care through the new NHS.

If an insurance system of funding had been adopted, hospital income would have increased with activity. However funding as a block grant by central government has meant that activity and staffing (70% of expenditure) have been constrained to keep within the cash allocation.
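The contrast between the two funding models can be shown with a toy calculation. All the figures below are invented purely for illustration (they are not real NHS data): activity rises each year, and while insurance-style income rises with it, a fixed block grant leaves a growing shortfall.

```python
# Toy comparison of hospital funding models: activity-based insurance
# income versus a fixed block grant. All figures are invented for
# illustration only, not real NHS data.

episodes = [10_000, 11_000, 12_100]   # episodes of care, rising ~10% a year
cost_per_episode = 1_000              # assumed average cost of one episode
block_grant = episodes[0] * cost_per_episode   # grant fixed at year-1 cost

for year, n in enumerate(episodes, start=1):
    cost = n * cost_per_episode
    insurance_income = cost           # item of service: income tracks activity
    shortfall = cost - block_grant    # block grant: gap grows with activity
    print(f"Year {year}: cost £{cost:,}, "
          f"insurance income £{insurance_income:,}, "
          f"block-grant shortfall £{shortfall:,}")
```

Under the invented figures, by year three the block-grant hospital faces a shortfall of £2.1 million, which in practice it can close only by constraining activity and staffing.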

The tension has continued for 70 years. We end up with the UK NHS being very cost-effective (one measure of efficiency), with a large activity delivered within a defined budget. On the other hand, other European countries have higher expenditure, more hospital beds, and more doctors and other staff than the UK. They also generally have better overall health outcome measures.

The NHS seems to be inadequately funded, but there is also a serious recruitment problem, of doctors and nurses in particular. This restricts the opportunity to spend any additional funding.

Will the UK NHS change to an insurance base? The answer is "No". Although, as in 1911, it would be popular with health care providers, it would be an enormous change and it would be strongly resisted by government, and the Treasury in particular. 

The NHS is extremely popular with the public. No doubt it will continue despite its faults and permanent underfunding.