The aisle in the pharmacy promising cellulite cures, fat-burning tablets or weight-loss supplements always surprises me. There are so many products, all claiming to offer a miracle cure. Everyone loves a quick fix, but do these products actually work?
A dimple by any other name
Cellulite is the term for dimpling of the skin, usually located on the hips, buttocks and thighs, and resembling the texture of orange peel. It is often associated with rapid weight gain, but can be found both in people who are overweight and in those with a normal BMI. It is more common in women, and generally appears after puberty. While not a serious medical condition, the stigma associated with it can cause distress in many people. Hence, the plethora of anti-cellulite creams.
Coffee with cream?
Most anti-cellulite creams contain agents that stimulate and tighten the skin, to reduce the dimpling appearance, while others target fat cells. One study compared a placebo (or dummy cream) with an anti-cellulite cream containing caffeine and ginger root extract, amongst other things. The investigators found no physical differences but respondents reported high satisfaction with the product, meaning that they thought it had worked.
The jury is still out on whether these creams actually work. There are a huge number of studies out there, and the results are inconclusive. Some studies have shown effects on the surface of the skin, or on fat cells, while others have not.
In the studies that do show a difference, it appears that these creams only work temporarily. Also, it is not known whether the active ingredients of the creams actually reach their targets or are simply absorbed by the epidermis.
One article reported that the industry's current thinking is that more is better when it comes to active ingredients, and so the creams will often contain 'skin firming' ingredients and caffeine, as well as 'slimming' ingredients. In many cases, the positive benefit felt by users is increased skin softness and smoothness, not a reduction of cellulite.
To be perfectly honest, I was quite surprised by the amount of literature available on anti-cellulite creams! Am I sold on the evidence that the creams work? I’m not sure. It appears that many of the active ingredients can reduce the size of fat cells, or tighten skin, but whether these are lasting effects, we don’t know.
It is important to consider that a quick fix is just that, a temporary quick fix that doesn’t address the underlying problem. Although cellulite can be genetic and be found on people with a low BMI, it is exacerbated by fat tissue. This means that lifestyle decisions can impact cellulite. At the end of the day, diet and exercise are a more long-term solution to cellulite reduction.
One article concluded:
Treatment modalities for cellulite range from topical creams to invasive procedures, such as liposuction.
There is no single treatment of cellulite that is completely effective.
Future treatment options for cellulite depend upon our understanding of the molecular basis and hormonal influences of cellulite adipose tissue.
Khan et al., 2010.
As long as these products do not contain dangerous ingredients, and they improve people's outlook, what is the harm? The bigger problem is that these products often make claims without the evidence to support them. Within the pharmaceutical industry, rigorous studies are conducted; however, many purely cosmetic products are unregulated and unverified. So I guess the final thought is this: pick your products wisely and assume that they may not work, nor have hard science behind them!
Draelos, Z.D. "Science and the Validation of Cosmeceutical Formulations." Journal of Cosmetic Dermatology 13.3 (2014): 167.
Khan, Misbah H., et al. “Treatment of cellulite: Part II. Advances and controversies.” Journal of the American Academy of Dermatology 62.3 (2010): 373-384.
Previously, I wrote about defying ageing with a focus on miracle creams and lotions. The general consensus is that you cannot stop the ageing process.
However, recently an article popped up in my inbox discussing how the blood of young mice can rejuvenate older mice. Cue images of horror movies where older people harvest young people for their blood!
In reality it is far more complicated than injecting the blood of a young person. We need to know how the factors in young blood act to 'rejuvenate'.
The potential of umbilical cord blood
In this particular study the older mice who had received plasma from the umbilical cord blood (UCB) of young mice had more neural connections forming, and showed improved memory and learning compared to control mice.
The researchers found that there was expression of a UCB-specific protein in the hippocampus of the older mice who had received the UCB plasma.
Previously, studies have only been able to demonstrate the 'rejuvenating' effects of young blood on older animals through a technique called parabiosis, in which the circulatory systems of two mice are joined (ewww!). Obviously, there would be ethical issues in humans, and even in animal research it is a proof-of-principle technique that is not overly practical. So knowing that we can identify factors in the plasma that 'rejuvenate' is a big win.
UCB can also repair damaged tissue
This same year, another article demonstrated that stem cells isolated from human UCB can prevent kidney failure in rats suffering from acute kidney injury. Currently, human UCB cells are used to treat a range of diseases, such as:
Blood diseases such as Aplastic and Fanconi Anaemia
Metabolic storage diseases
It is undeniable that there are properties of young blood that can 'defy the ageing process'. In terms of medical research, it seems that these factors will be able to counteract age-related memory loss and promote the repair of damaged organs. Unfortunately, UCB relies on tissues being donated, and has obvious limitations as well as ethical considerations. At the moment these experiments are 'proof-of-principle', but they pave the way for more UCB factors to be isolated that may help promote tissue rejuvenation. Think repairing damaged spinal cords!
And, let’s face it, eventually the cosmetic industry will jump on this band wagon to promise ‘age-defying’ treatments!!
Many hospitals collect human umbilical cord blood. Please consider donating your child's umbilical cord blood and tissue for medical research or for use in life-saving treatments.
In the 6 months leading to the end of my contract as a postdoc, and in my search for employment, I experienced a range of emotions that were not unlike the 5 stages of grief. First I was in denial, then I progressed through bargaining, anger and depression, and finally to acceptance. Writing this piece was cathartic, but I also think it is important to discuss the mental health of researchers…
You can grieve for a career
How is it that I can grieve for a lack of employment? In actual fact, it is more than possible; it makes sense. Grieving is a natural response to loss, and just as we can grieve for the loss of a loved one, we can grieve for a loss of self-identity, self-worth and our place in the world.
Faced with an ending contract, the prospect of losing financial security, and the fact that I am a foreigner with visa requirements, I threw myself headfirst into finding work. I somewhat naively (given I had worked for many years as a research assistant and had seen first-hand the plight of the postdoc) thought that with my 15+ years' research experience and a decent number of first-author publications, I would be inundated with responses!
What followed was email silence. So I told myself that maybe I was applying a little too early, and that people were not interested in my applications because I was still employed. Denial. I convinced myself that these were the reasons and that I still would not have a problem finding a new job.
While often the bargaining stage occurs after denial, it can also occur early on in the grieving process. Bargaining often comes in the form of a promise to change an action or behaviour. For me, the bargaining stage was a period of great productivity fuelled by desperation, as well as a period of guilt. I felt guilty that I had obviously (in my mind) not taken advantage of opportunities presented to me. So I reasoned that if I invested more in X, Y and Z, I would improve my chances of employment. I undertook a part-time Masters Degree, I started my blog, and I emailed every contact I had no matter how tenuous the link. I asked people for advice and went to networking and career events.
I transitioned to the anger phase quickly. I was angry with everyone who was happy with their job. I was angry with people who had permanent contracts and took them for granted, at people who didn't care about their work, and at those who did not take advantage of career-enhancing opportunities. I was angry at the lack of career mentorship. I cried all the time out of frustration; the slightest thing would set me off. Then there were the roadblocks to career advancement. For example, being told that I was too old to do another postdoc and therefore not eligible for many fellowships (despite only being 30-something!).
This naturally progressed into the depression stage. For those facing or experiencing unemployment, scholars have found that the loss of self-worth, doubt about one's abilities and place in society, and the inability to provide an income and financial security are the driving forces of the depression stage. I also felt shame that I was unable to find a job as a researcher, and that I was disappointing the people who had given me opportunities.
However, a chance networking event showed me I could look outside the box. This helped my transition into acceptance.
What needs to be said more often is that even if you don't work in a lab, it doesn't mean that you aren't a scientist. Rather than fighting against what is happening and further wallowing in self-pity, I have come to the conclusion that I am trying the best I can. It is as simple as that. My unemployment is a reflection of the status quo in academia and research and is, unfortunately, common. What we also need to remember is that there is no shame in looking for career alternatives that still utilise hard-won scientific skills!
This piece was originally written for the blog section of a newspaper, but they have asked me to write about something different so I decided to publish it here. Although this is a very personal piece, I think it is important to discuss how unemployment affects your mental health, and to maybe put my somewhat erratic mood swings into perspective! I didn’t write this to gain sympathy, but to put a voice to a common situation.
On Saturday April 22nd, I participated in the March for Science. I was expecting that, given it was an election weekend in France, not many people would march. I was proven wrong, and it was great to see that the march had a good turnout!
Even though the March for Science originated in the US in response to funding cuts for research, the sentiment has been echoed around the world. Researchers everywhere, including Europe and Australia, are facing reduced funding, reduced support and a lack of recognition for the hard work they do.
Being a scientist is not a stable, long term career by any stretch of the imagination. Yet we persist with it out of passion, and out of understanding that society will not move forward, nor will issues such as (gasp) climate change be tackled, if we don’t have researchers. Thus, the need for continued funding.
So maybe each country, and even each researcher had a different reason for marching on the 22nd, but I for one was glad that people were motivated to do it, and for others to see just how many scientists there actually are!
Images of the March for Science (Paris)
The images shown are from the March for Science in Paris. Thanks to Rebecca Whelan and Rachel Macmaster for the photos.
Warning: if you do not like spiders, or are squeamish, maybe don’t read this post!
When I was at university, I found a red bump on my elbow that progressed to an actual hole. Many doctor’s visits and anti-inflammatory steroid injections later, I had an impressive scar and perhaps, an impressive story.
A persistent myth
My doctor told me that the hole was the result of a bite from a white-tailed spider (Lampona cylindrata and Lampona murina), which supposedly causes tissue necrosis. Anyone in Australia has heard stories of people being bitten by a white-tailed spider and ending up requiring multiple skin grafts or, in the worst-case scenario, amputation! In actual fact, spider bite-induced necrosis (necrotic arachnidism) is linked to only one spider, the brown recluse (Loxosceles reclusa), which is found in the south-central and south-eastern areas of the United States. A compound found in the spider's venom triggers an acute immune response that results in inflammation-driven tissue destruction.
The link between the white-tailed spider and tissue necrosis is in fact an urban legend that has persisted since the 1980s.
So if the white-tailed spider doesn’t actually cause tissue necrosis, how did I get a hole in my elbow?
The jury is still out
The theories put forward focus on Mycobacterium ulcerans infection at the bite site resulting in an ulcer, or Staphylococcus aureus infection resulting in cellulitis (a bacterial skin infection).
It is unlikely that the majority of cases are the result of an M. ulcerans infection. Firstly, this type of infection is predominantly localised to tropical areas. Secondly, studies have shown that white-tailed spider venom does not carry this bacterium.
The second theory, that the tissue necrosis is the result of an S. aureus infection causing cellulitis, is more likely. I couldn't find a straightforward answer, but it seems that most researchers and clinicians believe the S. aureus infection occurs when bacteria enter at the site of broken skin, i.e. a bite site that someone has scratched.
So, despite a lack of evidence linking the white-tailed spider to necrotic arachnidism, the myth persists. I mean, what is going to have viewers glued to their TV or clicking on links:
“I lost my leg to a spider bite!” or, “I scratched a spider bite and now I have a bacterial infection!”
Tip: don't enter 'tissue ulcer' into Google Images if you are of a weak constitution…!
This post was inspired by a recent post in Australian Geographic.
I recently saw a documentary at the Palais de Tokyo as part of their exhibition entitled “All watched over by machines of loving grace.” The documentary, by BBC journalist Adam Curtis, was a fascinating insight into systems theory, cybernetics and ecology.
So of course, I took to the trusty scholarly search engines to find out more.
A (vicious) circle
Early scholars of the movement described nature as an electrical circuit, with amplifiers and dampeners of the natural order. In terms of ecology, systems theory described nature as a self-governing machine that responded to changes in the environment and adjusted to maintain a natural balance. In essence, an ordered cycle of life.
This is called a feedback loop, i.e. there is a cause and an effect. Following on from this, another factor can then influence the original input.
No, I’m not talking about robots!
Cybernetics is at the heart of systems theory, describing nature as a system that can be controlled and managed. Cybernetics considers nature in the bigger picture, looking at the response of the environment to changes.
Cybernetics introduced the concept of 'negative feedback': when the output that feeds back into the system pushes it away from equilibrium, the system responds by reducing that output, maintaining the steady state.
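A negative feedback loop can be sketched in a few lines of code. The numbers below are purely illustrative choices of my own (the set point, gain and disturbance are not values from the documentary or the ecology literature): each step, a disturbance pushes the system away from equilibrium, and a corrective term proportional to the error pulls it back.

```python
# Toy negative feedback loop (illustrative values only).
# Each step, a disturbance pushes the state away from the set point,
# and a corrective term proportional to the error pulls it back.

def step(state, set_point, gain, disturbance):
    """One time step: disturbance first, then negative feedback on the error."""
    perturbed = state + disturbance
    error = perturbed - set_point      # how far the system is from equilibrium
    return perturbed - gain * error    # feedback reduces the deviation

state = 20.0       # arbitrary units, e.g. a population level or a temperature
set_point = 25.0   # the equilibrium the system self-regulates toward
for _ in range(30):
    state = step(state, set_point, gain=0.5, disturbance=1.0)

# The state settles at a steady value near the set point (offset slightly by
# the constant disturbance) rather than running away: classic negative feedback.
print(round(state, 2))  # → 26.0
```

Flipping the sign of the corrective term turns this into positive feedback, and the same loop shows the state diverging instead of settling, which is the runaway scenario the early environmental movement worried about.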
Earth as a spaceship
Cybernetics spawned the early environmental movement in the 1970s. This was based on the modelling of the ecological feedback loops. Scholars and activists realised that if a steady-state of ecological systems could not be maintained, irreversible damage or a catastrophe would occur.
This produced the idea of the earth as a spaceship. A self-contained object that required all systems to exist and work in harmony in order to maintain a sustainable environment within the ‘spaceship’. If not, water, air, or food would be compromised. In fact, cybernetics also contributed to the development of the Doomsday Clock. This is a metaphorical countdown to the end of the world based on the (dis)equilibrium of the population and our environment.
It’s not just science fiction
Systems theory feedback loops are used in everything from psychology (understanding people's responses to the environment around them) to machine learning, computing and the development of the internet.
The most fascinating part of the documentary was the realisation that our reliance on machines to 'improve' quality of life and increase industrial productivity has undermined the idea of an ecological cybernetic system. The early theorists failed to anticipate that the negative feedback loop would not adjust to a rapidly changing human population, one that was in disequilibrium with its environment. This can be seen in the rapid extinction of animal and plant species, as well as in the wealth of some countries versus the absolute poverty of their neighbours.
It really was such an interesting documentary, and I urge you all to watch it (link included in first section).
Patten, Bernard C., and Eugene P. Odum. The American Naturalist 118.6 (1981): 886-895.
As we all know, yesterday, March 8th, was International Women's Day. But what exactly does this day mean? Does it mean writing a social media post thanking your mum, discussing gender inequality in the workplace, or attending rallies and protests? Yes, if it raises awareness. Me, I watched the live stream of the EU Prize for Women Innovators.
Awarding female innovators
EU Commissioner Carlos Moedas said there were two reasons a prize like this is important: 1) to recognise the achievements of women, and 2) to talk about female role models. As he noted, in the history of the Nobel Prize, only 5% of recipients have been women.
My interpretation of this is that we need a day to recognise and award the achievements of women outside of, and separate to, the achievements of men. Maybe some people are scoffing at this statement. However, as long as the majority of the awards in Innovation and Science go to men, separate recognition of women is needed.
During his speech, Commissioner Moedas commented that the irony of him 'commanding women to inspire other women' was not lost on him. He also said he hoped that women in innovation would "make their passion contagious to other women… and let it inspire another generation of women".
I ask you, who are your role models? Are they male or female? I tried to think of who my role models were. And do you know what? None really came to mind. Does this mean that I never had a role model, or does it mean that there was no mentoring and fostering of passions and interests? I’m not sure, but after watching the live stream and hearing about the awardees, I hope that there is a new generation of women who are inspired by female role models.
I digress, I want to discuss the award ceremony.
From the 147 applicants, 12 finalists were selected, and of these, 4 awards were given. The first was the 'rising innovator' award for women under 30. The recipient was Kristina Tsvetanova, an engineer and the co-founder of a company developing a tactile tablet for people who are visually impaired.
Third place for the EU prize for Women Innovators was awarded to Claudia Gärtner. She has developed a ‘lab-on-a-chip’ that can be used to detect cancer or infectious diseases and other agents from a blood sample.
Second place went to Petra Wadström. She and her team have designed a solar device that heats and sterilises water!
And finally, first place was awarded to Michela Magas, who described herself as a member of the creative/tech industries, and who has brought researchers together with designers, musicians and developers to bridge the gaps between academia and industry, and between the arts and sciences.
In her speech, she stated (and I am paraphrasing) that 'the role of the female perspective in innovation is driven by an attempt to understand human nature'. In other words, women bring a different perspective to innovation.
And if we are going to talk about role models, then her final statement, 'what you have inside you can lift you over walls and across borders', was truly inspiring.
To hear about the achievements of these women was inspiring, for lack of a better word, and it made me want to try harder at what I am doing so that I may be a role model for the next generation of women. Even if it is only my nieces that I inspire!
And that, my friends, is what these awards, and International Women's Day, are all about. Empowering women with the knowledge that we can aspire, achieve and receive recognition for what we do, and are trying to do.
As a result of growth in areas such as education, scientific knowledge and the progress of industry, society has seen an enhancement of life and culture. However, with these changes, a problem of domination has arisen. This is when one person uses or withholds information and knowledge from another person in order to gain control.
The heart of the emerging theory (consisting of the postmodern and critical analyses) is that organisations need to be flexible and less structured in order to change with society. Scholars write that postmodernism uses knowledge, information and language to create a culture in which language can be used either to empower or to dominate.
The feminist critique of communication
In honour of International Women's Day, I am going to discuss a sub-genre of the postmodern approach, the feminist critique. This field challenges and questions the ideological and cultural perceptions of female roles in society, and asks how communication shapes and influences women's roles within organisations.
Historically, patriarchal dominance has been used in organisations to relegate women to particular roles (for example, secretary or nurse), positions seen as "women's work" and perhaps beneath those a man should perform. Key to this dominance, and to the perpetuation of gender bias, is the language used.
Further to this is the structured hierarchy of an organisation, with women in more subservient positions and men in positions of power, or in positions involving decision-making and the relay of information.
Communication to dominate
One study examined maternity leave in relation to changes of identity and how workplace interactions affected leave choices. This study highlighted the tendency within organisations to attach meanings and identities to pregnant women, often to their detriment.
Central to this was the communication used, as there were differences between what was said and what was done. By this I mean that communication between the women and their supervisors and co-workers was used as a means of controlling the women's decisions about taking maternity leave. It was also found that the language used by supervisors affected the attitude of co-workers towards women taking, or returning from, maternity leave.
The communication processes were often used to make the women feel guilt, shame and inferiority about taking leave. It was also used to convince both the women and their co-workers that their work performance would be inferior or less productive based on the decision to take or return from maternity leave.
Communication to empower
Here I am going to focus on an example concerning female dairy farmers in rural India, where researchers studied how breaking down patriarchal dominance and empowering women influenced social change within the communities.
Traditionally in these communities, men controlled the money, interpersonal relationships and the distribution of work. But some villages were part of a program designed to give female dairy farmers greater education about dairying and running a co-op, and to encourage social clubs that would increase interpersonal interactions.
As you would expect, changing the communication processes and empowering women benefited everyone!
Not only was more information about dairying, health and finances exchanged among the women; men in these villages also said that there was a positive effect on the collaborative approach to dairying, as well as on their family life!
In short, it was evident that when the women were given a voice, the whole village not only benefited but also underwent social changes. In contrast, more isolated women who were not in the social clubs felt less empowered and still felt they were under patriarchal control.
Communication, how it is used, how it is delivered and what is said, has the ability to empower or dominate, to affect attitudes, culture and identity, and to create social changes to the benefit of all.
What we do not say can be as powerful as what we actually say.
It might be surprising to know that communication, that is, how we communicate, what we say (even when we aren't saying it) and how the communication is used, is quite a complicated field of study. In the next few posts, which I am calling "The Communication Series", I will discuss the theories and analyses of communication.
Communication theories attempt to describe and give purpose to the way that the communication processes occur and have advanced, as well as attempting to suggest ways to improve communication by highlighting limitations.
These theories are generally applied to organisations where there are clear structural and power differences, and where communication can either enhance or impair an organisation's success.
What you talkin’ about?
At the heart of communication is discourse, which encompasses the information and knowledge being relayed. Having said that, communication is not just a means by which information is moved between individuals; it is also a way of reinforcing and establishing ideas, ethics, structure and ethos, as well as output and productivity.
Who you talkin’ to?
If we take a business as an example, effective communication is critical for its interaction with employees/team members as well as with the environment outside of the organisation. The communication is therefore essential to its success.
Continuing with the 'business' scenario, the communication can be between peers on the same hierarchical level, from managers to employees, or from boards of directors to managers. Outside the business, it can take the form of customer feedback, profit, the organisation's ability to expand, marketing and public image, or how the organisation compares with others in the same industry or field.
What did you just say?
What is important to remember is that communication is not just the act of saying words; it can also arise from responding to stimuli or from the interpretation of facial expressions and behaviour. And let us not forget that it can also be delivered electronically, such as on a blog, for example…
If how and what we say can change, as well as the interpretation of the message, it demonstrates that communication is an ongoing, changing process. For effective communication to occur, incorporating the varying nature of communication is crucial. If we go back to the business scenario, how an organisation understands these changes and implements them to create new environments can define the organisation, i.e. the means and processes by which individuals within the organisation communicate in order to work together.
The many theory phenomena
Not every organisation is structured similarly, meaning the ways in which they communicate are vastly different. For example, how does communication work in organisations that are hierarchical versus organisations that are collaborative? How do the organisations tackle social and cultural changes, and how do they use communication to incorporate these changes? Hence, just as there are different styles of communication and organisational structures, there are also different theories that can be applied to how communication works within these organisations.
The three main theories are functional, centred and emerging.
“The Communication Series Theories”
The functional theory can be described as performance-based, focusing on how messages move through an organisation and on how the rules and regulations that drive output and yield shape communication. Because this theory emphasises structure, it does not apply well to changing methods of communication and culture.
The centred (or meaning-centred) approach asks how symbolism, stories and emotions are used to construct social structures and personal relationships. This approach encourages incorporating change and the ever-changing nature of communication.
Emerging communication theory covers the newer and more critical theories being applied to communication. In the following posts I will discuss two of these newer theories: critical theory and postmodernism.
All sources used throughout "The Communication Series" will be listed in the final post. However, if you are genuinely interested in a source, send me a message!
When I was young, I used to ask my mum what it was like when she was a child. Her response of 'I don't remember, it was so long ago' always astounded me. I was convinced that I would remember everything! Flash forward, and while I have some strong, distinct memories from my childhood, much of it is gone, just like my mum's. So what are memories? How are they formed and stored, and how do we lose them?
Memory is the retention of knowledge. Both neuroscientists and physiologists agree that this is a broad term covering different aspects of knowledge accumulation. In a general sense, this covers whether the knowledge is purely emotional, linked to a time and place, or if it is related to environmental stimuli.
Much of what has been gleaned about memory comes from medical conditions in which people cannot retain memories or demonstrate memory loss. For example, in individuals with Alzheimer's, it has been demonstrated that the hippocampus region of the brain is necessary for memory formation. Certain proteins in the hippocampus are targeted by beta-amyloid peptides (small proteins found in the brain tissue of individuals with Alzheimer's), resulting in memory loss. Restoring the levels of these proteins in mouse models of Alzheimer's restores the ability to learn and remember.
The hippocampus has been shown to be integral to the formation of episodic memories. An episodic memory is one whose recall is triggered by the stimuli of a place and/or time. New episodic memories can use the 'parameters' of a previous episodic memory, and retrieval can involve thoughts and emotions from other memories. This may be why one place or emotion can trigger a multitude of memories! It has been shown experimentally, through brain imaging, that the area of the brain involved in performing an activity associated with a particular place is the same area used to recall an episodic memory associated with that location. It has also been demonstrated that stimulation of the hippocampus produces a neural response similar to that produced by novel stimuli.
I will remember for ever and ever
So how is it that we fail to remember a conversation we had yesterday but can recall the phone number of the first house we ever lived in?
This comes down to short-term memory versus long-term memory.
Short-term memory is often also referred to as working memory (WM), and is retained for approximately 15-30 seconds. These memories are in a readily available state and usually apply to a task being performed. Repetition of the task or repeated exposure to the stimuli shunts the memory to long-term recall.
Long-term potentiation (LTP) is the persistent strengthening of connections between neurons, called synapses, in response to recent repeated activity. Synapses also exhibit plasticity: the ability to weaken or strengthen in response to decreases or increases in activity. The ‘fading’ of memories is a phenomenon neuroscientists call memory extinction, in which a conditioned response is forgotten as older memories are replaced with new experiences.
Physiologists have demonstrated that dopamine plays a role in memory formation, in particular short-term memory. Neurons in the hippocampus that are receptive to dopamine respond rapidly to novel stimuli, but as the stimuli become more familiar, the cells stop responding. Interfering with dopamine signalling can block LTP, while making cells more receptive to dopamine enhances it.
Given that there are also learned responses based on reward and behaviour, how are the different memory systems (i.e. WM vs LTP vs reward-based memory formation) recruited?
The general consensus is that dopamine-enhanced LTP occurs only for stimuli that will be behaviourally advantageous. For other memory systems, recruitment is based on the anticipated demands of a memory and can involve a feedback mechanism that predicts the outcome of interacting with the stimuli.
This is where it can become a little confusing! The different parts of the brain control different memory systems. As discussed, the hippocampus is involved in LTP while the prefrontal cortex, for example, is involved in the maintenance and manipulation of WM.
One study demonstrated that when an individual was distracted, or the delay before recall was increased, during a task that required WM, reliance on LTP increased as WM accuracy declined. The authors concluded that the anticipation of increased difficulty in performing WM tasks led to a shift away from WM in order to preserve high-level performance.
It’s in the genes?
Surprisingly little is known about the biology of memory; studies into Alzheimer’s have yielded much of what we know about the proteins crucial to retaining memories.
Neuroscientists combine memory tests with investigations at the genomic level, and have found that different genes are activated, along with differences in protein production, depending on the memory system. It is also increasingly obvious that epigenetics plays a very important role. Epigenetic modifications, changes to DNA and proteins that alter their activity without changing the underlying genetic or protein code, are rapid and occur in response to environmental stimuli. Furthermore, these changes are plastic, which, as discussed, is important to memory formation and retention. This is an exciting area of research, with much more to come!
It has been demonstrated that diet can affect memory. In particular, a high-fat diet (HFD) can result in poor memory retention and, in animal studies, disrupts learning and performance. A HFD causes insulin resistance in cells of the hippocampus, which impairs insulin signalling.
One study observed that mice on a HFD showed reduced exploration of a novel object, and when re-introduced to the object, spent more time investigating it than control-diet mice did. These results indicated that both WM and LTP were affected by the HFD. When the diets were swapped, the effects on memory were reversed. Food for thought?!
Liar, liar pants on fire!
The demonstrated plasticity of memory formation and recall can also result in false memories. A false memory is the recall of experiencing something that wasn’t actually experienced.
There are two types of false memory: those formed through misinformation, and those formed spontaneously.
Studies have demonstrated that children seem particularly susceptible to spontaneous false memory formation, while in adults sleep deprivation can be a cause. If a person is sleep-deprived when presented with the stimuli and is later provided with misinformation, their recall of events can differ from what actually occurred.
This has also been demonstrated in individuals who exhibit ‘total recall’: people who can recall memories in rich detail, unaided by mnemonics or memory aids. Even their recall can be corrupted by misinformation or misleading suggestions.
With all that we have learned about memory formation and retention, what about age-related memory loss?
Age-related memory loss is associated with reduced activity of genes involved in plasticity, with the degradation or loss of neurons, and with decreased synaptic plasticity. The hippocampus in particular appears to exhibit age-related decay, which can lead to a loss of autobiographical recall.
However, not all memory systems are affected by age. One study showed no age-related differences in the ability to learn configural tasks, only delayed response times, i.e. older adults completed the tasks more slowly. The older adults did show a deficit in recalling newly learned episodic memories, with higher rates of false memory recall. This was further confounded when several cues could initiate the retrieval of a memory.
However, all is not lost. A recent study demonstrated that the injection of blood from young mice could counteract ageing at the molecular, structural, functional and cognitive levels in the hippocampus of aged mice!
While the authors observed these changes, they had no data to explain why or how they occurred. They suggested that ‘pro-youthful’ factors may promote the regeneration of decaying tissue or counteract the activity of ‘pro-ageing’ factors. Current literature suggests that stem cells in the young mouse blood may play a role.
Stress is also linked to poor memory retention and recall, as is a lack of sleep.
While it appears much is known about memory, it is acknowledged that there is still a long way to go to understand the brain and memories. Unfortunately, progress is generally made by understanding how something has gone wrong.