We all have to make hard decisions from time to time. The hardest of my life was whether or not to change research fields after my PhD, from fundamental physics to climate physics. I had job offers that could have taken me in either direction – one to join Stephen Hawking’s Relativity and Gravitation Group at Cambridge University, another to join the Met Office as a scientific civil servant.
I wrote down the pros and cons of both options as one is supposed to do, but then couldn’t make up my mind at all. Like Buridan’s donkey, I was unable to move to either the bale of hay or the pail of water. It was a classic case of paralysis by analysis.
Since it was doing my head in, I decided to try to forget about the problem for a couple of weeks and get on with my life. In that intervening time, my unconscious brain decided for me. I simply walked into my office one day and the answer had somehow become obvious: I would make the change to studying the weather and climate.
But I remain fascinated by what was going on in my head back then, which led my subconscious to make a life-changing decision that my conscious mind could not. Is there something to be understood here not only about how to make difficult decisions, but about how humans make the leaps of imagination that characterise us as such a creative species? I believe the answer to both questions lies in a better understanding of the extraordinary power of noise.
I went from the pencil-and-paper mathematics of Einstein’s theory of general relativity to running complex climate models on some of the world’s biggest supercomputers. Yet big as they were, they were never big enough – the real climate system is, after all, very complex.
In the early days of my research, one only had to wait a couple of years and top-of-the-range supercomputers would get twice as powerful. This was the era when transistors were getting smaller and smaller, allowing more to be crammed on to each microchip. The consequent doubling of transistor numbers – and with it computer performance – every couple of years was known as Moore’s Law.
There is, however, only so much miniaturisation you can do before the transistor starts becoming unreliable in its key role as an on-off switch. Today, with transistors starting to approach atomic size, we have pretty much reached the limit of Moore’s Law. To achieve more number-crunching capability, computer manufacturers must bolt together more and more computing cabinets, each one crammed full of chips.
But there’s a problem. Increasing number-crunching capability this way requires a lot more electric power – modern supercomputers the size of tennis courts consume tens of megawatts. I find it something of an embarrassment that we need so much energy to try to accurately predict the effects of climate change.
That’s why I became interested in how to construct a more accurate climate model without consuming more energy. And at the heart of this is an idea that sounds counterintuitive: by adding random numbers, or “noise”, to a climate model, we can actually make it more accurate in predicting the weather.
A constructive role for noise
Noise is usually seen as a nuisance – something to be minimised wherever possible. In telecommunications, we speak about trying to maximise the “signal-to-noise ratio” by boosting the signal or reducing the background noise as much as possible. However, in nonlinear systems, noise can be your friend and actually contribute to boosting a signal. (A nonlinear system is one whose output does not vary in direct proportion to the input. You will likely be very happy to win £100 million on the lottery, but probably not twice as happy to win £200 million.)
Noise can, for example, help us find the maximum value of a complicated curve such as in Figure 1, below. There are many situations in the physical, biological and social sciences as well as in engineering where we might need to find such a maximum. In my field of meteorology, the process of finding the best initial conditions for a global weather forecast involves identifying the maximum point of a very complicated meteorological function.
However, employing a “deterministic algorithm” to locate the global maximum doesn’t usually work. This type of algorithm will typically get stuck at a local peak (for example at point a) because the curve moves downwards in both directions from there.
An answer is to use a technique called “simulated annealing” – so called because of its similarities with annealing, the heat treatment process that changes the properties of metals. Simulated annealing, which employs noise to get round the issue of getting stuck at local peaks, has been used to solve many problems including the classic travelling salesman puzzle of finding the shortest path between a large number of cities on a map.
Figure 1 shows a possible route to locating the curve’s global maximum (point 9) by using the following criteria:
If a randomly chosen point is higher than the current position on the curve, we always move to the new point.
If it is lower than the current position, the suggested point isn’t necessarily rejected. It depends whether the new point is a lot lower or just a little lower.
However, the decision to move to a new point also depends on how long the analysis has been running. Whereas in the early stages, random points quite a bit lower than the current position may be accepted, in later stages only those that are higher or just a tiny bit lower are accepted.
The technique is known as simulated annealing because early on – like hot metal in the early phase of cooling – the system is pliable and changeable. Later in the process – like cold metal in the late phase of cooling – it is almost rigid and unchangeable.
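The acceptance criteria above can be sketched in a few lines of code. This is a minimal illustration rather than production software – the test curve, step size and cooling schedule are all invented for the example:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, seed=1):
    """Noisy search for the maximum of f, cooling from temperature t0 towards zero."""
    rng = random.Random(seed)
    x = best = x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9    # "temperature" falls as the run proceeds
        x_new = x + rng.gauss(0.0, 0.5)    # randomly chosen nearby point
        delta = f(x_new) - f(x)
        # Always move to a higher point; move to a lower one with a probability
        # that shrinks both with the size of the drop and as the system "cools".
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = x_new
        if f(x) > f(best):
            best = x
    return best

# A bumpy curve: local peaks (for example near x = 0.5), global peak near x = 2.6
f = lambda x: math.sin(3 * x) - 0.1 * (x - 3) ** 2
x_max = simulated_annealing(f, x0=-2.0)  # starts far from the global peak
```

A purely deterministic hill-climber started at x = -2 would stop at the first local peak it reached; the noise in the acceptance rule is what lets the search escape and settle near the global maximum.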
How noise can help climate models
Noise was introduced into comprehensive weather and climate models around 20 years ago. A key reason was to represent model uncertainty in our ensemble weather forecasts – but it turned out that adding noise also reduced some of the biases the models had, making them more accurate simulators of weather and climate.
Unfortunately, these models require huge supercomputers and a lot of energy to run them. They divide the world into small gridboxes, with the atmosphere and ocean within each assumed to be constant – which, of course, it isn’t. The horizontal scale of a typical gridbox is around 100km – so one way of making a model more accurate is to reduce this distance to 50km, or 10km or 1km. However, halving the width of a gridbox doubles the number of boxes in each of the three spatial dimensions and requires the model’s timestep to be halved as well, increasing the computational cost of running the model by up to a factor of 16 (2 × 2 × 2 × 2) – meaning it consumes a lot more energy.
Here again, noise offered an appealing alternative. The proposal was to use it to represent the unpredictable (and unmodellable) variations in small-scale climatic processes like turbulence, cloud systems, ocean eddies and so on. I argued that adding noise could be a way of boosting accuracy without having to incur the enormous computational cost of reducing the size of the gridboxes. For example, as has now been verified, adding noise to a climate model increases the likelihood of producing extreme hurricanes – reflecting the potential reality of a world whose weather is growing more extreme due to climate change.
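The effect can be demonstrated with a toy system rather than a real climate model. In this sketch (all parameters invented for illustration), a nonlinear system has two regimes; the deterministic version settles into one regime and stays there, while the version with noise occasionally gets kicked across into the other – a toy analogue of noise allowing a model to produce extremes it would otherwise never simulate:

```python
import random

def max_state(steps=40000, dt=0.01, sigma=0.0, x0=-0.5, seed=7):
    """Integrate dx = (x - x^3) dt + sigma dW and return the largest x visited.
    The drift x - x^3 has two stable regimes, near x = -1 and x = +1."""
    rng = random.Random(seed)
    x = xmax = x0
    for _ in range(steps):
        noise = sigma * rng.gauss(0.0, 1.0) * dt ** 0.5  # Euler-Maruyama noise term
        x += (x - x ** 3) * dt + noise
        xmax = max(xmax, x)
    return xmax

quiet = max_state(sigma=0.0)  # deterministic: slides into the x = -1 regime and stays
noisy = max_state(sigma=0.8)  # stochastic: noise kicks it across into the x = +1 regime
```

The deterministic run never climbs above its starting point; the noisy run visits both regimes, just as a climate model with noise can reach weather states its deterministic counterpart never produces.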
The computer hardware we use for this modelling is inherently noisy – electrons travelling along wires in a computer move in partly random ways due to its warm environment. Such randomness is called “thermal noise”. Could we save even more energy by tapping into it, rather than having to use software to generate pseudo-random numbers? To me, low-energy “imprecise” supercomputers that are inherently noisy looked like a win-win proposal.
But not all of my colleagues were convinced. They were uncomfortable that computers might not give the same answers from one day to the next. To try to persuade them, I began to think about other real-world systems that, because of limited energy availability, also use noise that is generated within their hardware. And I stumbled on the human brain.
Noise in the brain
Every second of the waking day, our eyes alone send gigabytes of data to the brain. That’s not much different to the amount of data a climate model produces each time it outputs data to memory.
The brain has to process this data and somehow make sense of it. If it did this using the power of a supercomputer, that would be impressive enough. But it does it using one millionth of that power: about 20W instead of 20MW – no more than it takes to power a lightbulb. Such energy efficiency is mind-bogglingly impressive. How on Earth does the brain do it?
An adult brain contains some 80 billion neurons. Each neuron has a long slender biological cable – the axon – along which electrical impulses are transmitted from one set of neurons to the next. But these impulses, which collectively describe information in the brain, have to be boosted by protein “transistors” positioned at regular intervals along the axons. Without them, the signal would dissipate and be lost.
The energy for these boosts ultimately comes from an organic compound called ATP (adenosine triphosphate). This enables electrically charged atoms of sodium and potassium (ions) to be pushed through small channels in the neuron walls, creating electrical voltages which, much like those in silicon transistors, amplify the neuronal electric signals as they travel along the axons.
With 20W of power spread across tens of billions of neurons, the voltages involved are tiny, as are the axon cables. And there is evidence that axons with a diameter less than about 1 micron (which most in the brain are) are susceptible to noise. In other words, the brain is a noisy system.
If this noise simply created unhelpful “brain fog”, one might wonder why we evolved to have so many slender axons in our heads. Indeed, there are benefits to having fatter axons: the signals propagate along them faster. If we still needed fast reaction times to escape predators, then slender axons would be disadvantageous. However, developing communal ways of defending ourselves against enemies may have reduced the need for fast reaction times, leading to an evolutionary trend towards thinner axons.
Perhaps, serendipitously, evolutionary mutations that further increased neuron numbers and reduced axon sizes, keeping overall energy consumption the same, made the brain’s neurons more susceptible to noise. And there is mounting evidence that this had another remarkable effect: it encouraged in humans the ability to solve problems that required leaps in imagination and creativity.
Perhaps we only truly became Homo sapiens when significant noise began to appear in our brains?
Putting noise in the brain to good use
Many animals have developed creative approaches to solving problems, but there is nothing to compare with a Shakespeare, a Bach or an Einstein in the animal world.
How do creative geniuses come up with their ideas? Here’s a quote from Andrew Wiles, perhaps the most famous mathematician alive today, about the time leading up to his celebrated proof of the maths problem (misleadingly) known as Fermat’s Last Theorem:
When you reach a real impasse, then routine mathematical thinking is of no use to you. Leading up to that kind of new idea, there has to be a long period of tremendous focus on the problem without any distraction. You have to really think about nothing but that problem – just concentrate on it. And then you stop. [At this point] there seems to be a period of relaxation during which the subconscious appears to take over – and it’s during this time that some new insight comes.
This notion seems universal. Physics Nobel Laureate Roger Penrose has spoken about his “Eureka moment” when crossing a busy street with a colleague (perhaps reflecting on their conversation while also looking out for oncoming traffic). For the father of chaos theory Henri Poincaré, it was catching a bus.
And it’s not just creativity in mathematics and physics. Comedian John Cleese, of Monty Python fame, makes much the same point about artistic creativity – it occurs not when you are focusing hard on your trade, but when you relax and let your unconscious mind wander.
Of course, not all the ideas that bubble up from your subconscious are going to be Eureka moments. Physicist Michael Berry talks about these subconscious ideas as if they are elementary particles called “claritons”:
Actually, I do have a contribution to particle physics … the elementary particle of sudden understanding: the “clariton”. Any scientist will recognise the “aha!” moment when this particle is created. But there is a problem: all too frequently, today’s clariton is annihilated by tomorrow’s “anticlariton”. So many of our scribblings disappear beneath a rubble of anticlaritons.
Here is something we can all relate to: that in the cold light of day, most of our “brilliant” subconscious ideas get annihilated by logical thinking. Only a very, very, very small number of claritons remain after this process. But the ones that do are likely to be gems.
In his renowned book Thinking, Fast and Slow, the Nobel prize-winning psychologist Daniel Kahneman describes the brain in a binary way. Most of the time – when walking, chatting and looking around (in other words, when multitasking) – it operates in a mode Kahneman calls “system 1”: a rather fast, automatic, effortless mode of operation.
By contrast, when we are thinking hard about a specific problem (unitasking), the brain is in the slower, more deliberative and logical “system 2”. To perform a calculation like 37 × 13, we have to stop walking, stop talking, close our eyes and even put our hands over our ears. There is no chance for significant multitasking in system 2.
My 2015 paper with computational neuroscientist Michael O’Shea interpreted system 1 as a mode where available energy is spread across a large number of active neurons, and system 2 as where energy is focused on a smaller number of active neurons. The amount of energy per active neuron is therefore much smaller when in the system 1 mode, and it would seem plausible that the brain is more susceptible to noise when in this state. That is, in situations when we are multitasking, the operation of any one of the neurons will be most susceptible to the effects of noise in the brain.
Berry’s picture of clariton-anticlariton interaction seems to suggest a model of the brain where the noisy system 1 and the deterministic system 2 act in synergy. The anticlariton is the logical analysis that we perform in system 2 which, most of the time, leads us to reject our crazy system 1 ideas.
But sometimes one of these ideas turns out to be not so crazy.
This is reminiscent of how our simulated annealing analysis (Figure 1) works. Initially, we might find many “crazy” ideas appealing. But as we get closer to locating the optimal solution, the criteria for accepting a new suggestion becomes more stringent and discerning. Now, system 2 anticlaritons are annihilating almost everything the system 1 claritons can throw at them – but not quite everything, as Wiles found to his great relief.
The key to creativity
If the key to creativity is the synergy between noisy and deterministic thinking, what are some consequences of this?
On the one hand, if you do not have the necessary background information, your analytic powers will have little to work on. That’s why Wiles says that leading up to the moment of insight, you have to immerse yourself in your subject. You aren’t going to have brilliant ideas which will revolutionise quantum physics unless you have a pretty good grasp of quantum physics in the first place.
But you also need to leave yourself enough time each day to do nothing much at all, to relax and let your mind wander. I tell my research students that if they want to be successful in their careers, they shouldn’t spend every waking hour in front of their laptop or desktop. And swapping it for social media probably doesn’t help either, since you still aren’t really multitasking – each moment you are on social media, your attention is still fixed on a specific issue.
But going for a walk or bike ride or painting a shed probably does help. Personally, I find that driving a car is a useful activity for coming up with new ideas and thoughts – provided you don’t turn the radio on.
When making difficult decisions, this suggests that, having listed all the pros and cons, it can be helpful not to actively think about the problem for a while. I think this explains how, years ago, I finally made the decision to change my research direction – not that I knew it at the time.
Because the brain’s system 1 is so energy efficient, we use it to make the vast majority of the many decisions in our daily lives (some say as many as 35,000) – most of which aren’t that important, like whether to continue putting one leg in front of the other as we walk down to the shops. (I could alternatively stop after each step, survey my surroundings to make sure a predator was not going to jump out and attack me, and on that basis decide whether to take the next step.)
However, this system 1 thinking can sometimes lead us to make bad decisions, because we have simply defaulted to this low-energy mode and not engaged system 2 when we should have. How many times do we say to ourselves in hindsight: “Why didn’t I give such and such a decision more thought?”
Of course, if instead we engaged system 2 for every decision we had to make, then we wouldn’t have enough time or energy to do all the other important things we have to do in our daily lives (so the shops may have shut by the time we reach them).
From this point of view, we should not view giving wrong answers to unimportant questions as evidence of irrationality. Kahneman cites the fact that more than 50% of students at MIT, Harvard and Princeton gave the incorrect answer to this simple question – a bat and a ball cost $1.10 in total; the bat costs one dollar more than the ball; how much does the ball cost? – as evidence of our irrationality. The correct answer, if you think about it, is 5 cents. But system 1 screams out ten cents.
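The system 2 arithmetic behind the 5-cent answer takes only a couple of lines to write out:

```python
# If the ball costs b, the bat costs b + 1.00 and together they cost 1.10,
# so 2b + 1.00 = 1.10, giving b = 0.05.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
assert abs((bat + ball) - 1.10) < 1e-9  # together they cost $1.10
assert abs(bat - ball - 1.00) < 1e-9    # the bat costs a dollar more
```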
If we were asked this question on pain of death, one would hope we would spend enough thought to come up with the correct answer. But if we were asked the question as part of an anonymous after-class test, when we had much more important things to spend time and energy doing, then I’d be inclined to think of it as irrational to give the right answer.
If we had 20MW to run the brain, we could spend part of it solving unimportant problems. But we only have 20W and we need to use it carefully. Perhaps it’s the 50% of MIT, Harvard and Princeton students who gave the wrong answer who are really the clever ones.
Just as a climate model with noise can produce types of weather that a model without noise can’t, so a brain with noise can produce ideas that a brain without noise can’t. And just as these types of weather can be exceptional hurricanes, so the idea could end up winning you a Nobel Prize.
So, if you want to increase your chances of achieving something extraordinary, I’d recommend going for that walk in the countryside, looking up at the clouds, listening to the birds cheeping, and thinking about what you might eat for dinner.
So could computers be creative?
Will computers, one day, be as creative as Shakespeare, Bach or Einstein? Will they understand the world around us as we do? Stephen Hawking famously warned that AI will eventually take over and replace mankind.
However, the best-known advocate of the idea that computers will never understand as we do is Hawking’s old colleague, Roger Penrose. In making his claim, Penrose invokes an important “meta” theorem in mathematics known as Gödel’s theorem, which says there are mathematical truths that can’t be proven by deterministic algorithms.
There is a simple way of illustrating Gödel’s theorem. Suppose we make a list of all the most important mathematical theorems that have been proven since the time of the ancient Greeks. First on the list would be Euclid’s proof that there are an infinite number of prime numbers, which requires one really creative step (multiply the supposedly finite number of primes together and add one). Mathematicians would call this a “trick” – shorthand for a clever and succinct mathematical construction.
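Euclid’s trick is easy to check numerically. In this small sketch (the helper name is mine), the product of any supposedly complete finite list of primes, plus one, is divisible by none of them – so the list must be missing a prime:

```python
from math import prod

def euclid_witness(primes):
    """Return prod(primes) + 1, which no prime in the given list divides."""
    n = prod(primes) + 1
    assert all(n % p != 0 for p in primes)  # the trick: the remainder is always 1
    return n

n = euclid_witness([2, 3, 5, 7, 11, 13])  # 30031 = 59 x 509: new primes appear
```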
But is this trick useful for proving important theorems further down the list, like Pythagoras’s proof that the square root of two cannot be expressed as the ratio of two whole numbers? It’s clearly not; we need another trick for that theorem. Indeed, as you go down the list, you’ll find that a new trick is typically needed to prove each new theorem. It seems there is no end to the number of tricks that mathematicians will need to prove their theorems. Simply loading a given set of tricks on a computer won’t necessarily make the computer creative.
Does this mean mathematicians can breathe easily, knowing their jobs are not going to be taken over by computers? Well maybe not.
I have been arguing that we need computers to be noisy rather than entirely deterministic, “bit-reproducible” machines. And noise, especially if it comes from quantum mechanical processes, would break the assumptions of Gödel’s theorem: a noisy computer is not an algorithmic machine in the usual sense of the word.
Does this imply that a noisy computer can be creative? Alan Turing, pioneer of the general-purpose computing machine, believed this was possible, suggesting that “if a machine is expected to be infallible then it cannot also be intelligent”. That is to say, if we want the machine to be intelligent then it had better be capable of making mistakes.
Others may argue there is no evidence that simply adding noise will make an otherwise stupid machine into an intelligent one – and I agree, as it stands. Adding noise to a climate model doesn’t automatically make it an intelligent climate model.
However, the type of synergistic interplay between noise and determinism – the kind that sorts the wheat from the chaff of random ideas – has hardly yet been developed in computer codes. Perhaps we could develop a new type of AI model where the AI is trained by getting it to solve simple mathematical theorems using the clariton-anticlariton model; by making guesses and seeing if any of these have value.
For this to be at all tractable, the AI system would need to be trained to focus on “educated random guesses”. (If the machine’s guesses are all uneducated ones, it will take forever to make progress – like waiting for a group of monkeys to type the first few lines of Hamlet.)
For example, in the context of Euclid’s proof that there are an unlimited number of primes, could we train an AI system in such a way that a random idea like “multiply the assumed finite number of primes together and add one” becomes much more likely than the completely useless random idea “add the assumed finite number of primes together and subtract six”? And if a particular guess turns out to be especially helpful, can we train the AI system so that the next guess is a refinement of the last one?
If we can somehow find a way to do this, it could open up modelling to a completely new level that is relevant to all fields of study. And in so doing, we might yet reach the so-called “singularity” when machines take over from humans. But only when AI developers fully embrace the constructive role of noise – as it seems the brain did many thousands of years ago.
For now, I feel the need for another walk in the countryside. To blow away some fusty old cobwebs – and perhaps sow the seeds for some exciting new ones.
Columnist Peggy Noonan wrote that Santos was “a stone cold liar who effectively committed election fraud.”
And now Santos has taken the dramatic step of removing himself temporarily from the committees he’s been assigned to: the House Small Business Committee and the Science, Space and Technology Committee. The Washington Post reports Santos told his GOP colleagues that he would be a “distraction” until cleared in several probes of his lies.
Santos’ lies may have gotten him into hot water with the voters who put him in the House, and a few of his colleagues, along with the New York GOP, want him to resign. CBS News reported that federal investigators are looking at Santos’ finances and financial disclosures.
But the bulk of Santos’ misrepresentations may be protected by the First Amendment. The U.S. Supreme Court has concluded that lies enjoy First Amendment protection – not because of their value, but because the government cannot be trusted with the power to regulate lies.
In other words, lies are protected by the First Amendment to safeguard democracy.
So how can unwitting voters be protected from sending a fraud to Congress?
Any attempt to craft a law aimed at lies in politics will run into practical enforcement problems. And attempts to regulate such lies could collide with a 2012 Supreme Court case, United States v. Alvarez.
The Supreme Court rejected the government’s argument that lies should not be protected by the First Amendment. The court concluded that lies are protected by the First Amendment unless there is a legally recognized harm, such as defamation or fraud, associated with the lie. So the Stolen Valor Act – the federal law at issue in Alvarez, which criminalized false claims of having received military decorations – was struck down as an unconstitutional restriction on speech. The court pointed out that some false statements are “inevitable if there is to be open and vigorous expression of views in public and private conversation.”
Crucially, the court feared that the power to criminalize lies could damage American democracy. The court reasoned that unless the First Amendment limits the power of the government to criminalize lies, the government could establish an “endless list of subjects about which false statements are punishable.”
In Alvarez, the Supreme Court expressed concern about laws criminalizing lies in politics. It warned that the Stolen Valor Act applied to “political contexts, where although such lies are more likely to cause harm,” the risk that prosecutors would bring charges for ideological reasons was also high.
The court believed that the marketplace of ideas was a more effective and less dangerous mechanism for policing lies, particularly in politics. Politicians and journalists have the incentives and the resources to examine the records of candidates such as Santos to uncover and expose falsehoods.
The story of George Santos, though, is a cautionary tale for those who hold an idealized view of how the marketplace of ideas operates in contemporary American politics.
Democracy has not had a long run when measured against the course of human history. From the founding of the American republic in the late 18th century until the advent of the modern era, there was a rough division of labor. Citizens selected leaders, and experts played a critical gatekeeping role, mediating the flow of information.
The election of George Santos illustrates the challenges facing American democracy. The First Amendment was written in an era when government censorship was the principal danger to self-government. Today, politicians and ordinary citizens can harness new information technologies to spread misinformation and deepen polarization. A weakened news media will fail to police those assertions, or a partisan news media will amplify them.
Justice Stephen Breyer’s concurring opinion in Alvarez argued that a different test should be used. Courts, Breyer said, should assess any speech-related harm that might flow from the law, as well as the importance of the government objective and whether the law furthers that objective. This is known as intermediate scrutiny or proportionality analysis – a form of analysis widely used by constitutional courts in other democracies.
Intermediate scrutiny or proportionality analysis does not treat all government regulations of speech as presumptively unconstitutional. It forces courts to balance the value of the speech against the justifications for the law in question. That is the right test, Justice Breyer concluded, when assessing laws that penalize “false statements about easily verifiable facts.”
The two approaches will lead to different results when governments seek to regulate lies. Even proposed, narrowly written laws aimed at factual misrepresentations by politicians about their records or about who won an election might not survive the high degree of protection afforded lies in the United States.
Intermediate scrutiny or proportionality analysis, on the other hand, will likely enable some government regulation of lies – including those of the next George Santos – to survive legal challenge.
Democracies have a better long-term survival track record than dictatorships because they can and do evolve to deal with new dangers. The success of America’s experiment in self-government may well hinge, I believe, on whether the country’s democracy can evolve to deal with new information technologies that help spread falsehoods that undermine democracy.
Miguel Schor does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
With the festive season not long behind us and a new year begun, many of us take a moment to reflect on the year that was, and to consider what we would like to see in the year ahead.
With that in mind, I thought I would take a look back at 2022. Last year will likely be remembered for three key events. First, the world generally exited the pandemic cloud of COVID-19. Second, war commenced between Russia and Ukraine. Third, inflation returned with a vengeance, quickly followed by one of the fastest tightening cycles in central bank history: the official cash rate in Australia increased from 0.1 per cent in April 2022 to 3.10 per cent by the RBA’s December meeting.
This mix of events led to one of the strongest risk-off years we have seen since the Global Financial Crisis, with few places for investors to find sanctuary as losses occurred across growth and defensive assets alike.
Investor sentiment was broadly very negative during 2022, and it is always a challenge for any growth (or risk) asset to perform well when the market has no appetite for risk of any kind.
If we look specifically at the Australian Small Ordinaries index, its return for the calendar year of 2022 was negative 20.7 per cent. To give this context, the ASX 100 declined by 3.9 per cent and the ASX 300 returned negative 6.1 per cent. It is fair to say that the risk-on trade in small companies of the last few years moved into reverse in 2022 – a consequence of the macro factors above, coupled with a more bearish investor and market.
What to consider for 2023?
In moving to our outlook for 2023, we first need to consider the three points above and ask where they stand today and how they may evolve over the next 12 months – with the final question being what impact this will have on equity market performance.
As a starting point, it is fair to say that the impact of COVID-19 continues to pass, becoming more of a memory than a current issue. Even China, the final holdout, has now moved to accept living with COVID-19 as it copes with re-opening and reintegrating with the rest of the world. As it stands today, we would expect the impact of COVID-19 to continue to diminish from here. One datapoint that has been interesting to follow over the pandemic is the UBS Composite Supply Chain Indicator, which is now in a strong downward trend and moving closer to pre-pandemic levels.
A second example can be seen in spot indexes for international container freight costs, which are now down roughly 80 per cent from their peak 18 months ago. This matters because freight costs were one of the early contributors to rising inflation, so it is encouraging to see this leading cause returning to more normal levels.
Next, we move to the war in Ukraine, which continues to grind along and will no doubt continue to influence energy prices and broader speculation. Although the outcome is unknown, this is what markets sometimes call a known unknown: we are all aware of what is happening, and many governments and countries are working around it. This is best seen in Europe, which continues to diversify its sources and supply of energy while fast-tracking non-Russian-dependent infrastructure. Short of a major surprise, we can largely say this event has been priced into markets.
Finally, inflation and interest rates have been a by-product of the above two events, and these arguably caused the most disruption to equity markets in 2022. At the time of writing, the most recent inflation data for Australia was released last week and came in higher than consensus expectations, with trimmed mean inflation (which removes the most volatile items) at 6.5 per cent against an expectation of 6.1 per cent.
At this point most major market commentators believe we are likely to see two further interest rate increases in the first quarter of this year. Beyond that timeframe the speculation grows: some believe inflation will prove more stubborn and require further effort from central banks, while others believe the two remaining expected rate increases will be sufficient to manage inflation, particularly given the delayed transmission mechanism in Australia caused by fixed-rate mortgages and their term to reset.
Some also believe we may see interest rates start to fall in late 2023, which would become a tailwind for equities, in particular some of the growth names which had the toughest performance over 2022.
What can we expect from small caps?
Looking through all of this noise to our outlook for Australian small companies in 2023, we think, as always, the starting point is important. At a market level we started 20 per cent cheaper than at the same point in the prior year. Further, we have seen earnings downgrades in some parts of the market, while other parts have proven far more resilient than expected; the resources sector, for example, managed to grow its earnings over 2022. So in some pockets we find valuations, from a fundamental perspective, to be very attractive.
While there is a belief that interest rates have further to go, we still see significant risks in the more speculative parts of the market. These centre on companies that will have little control over their earnings power in the next 12 months, or that are less mature and therefore less able to weather the economic conditions ahead. Rising interest rates are also unfavourable for building stocks and some consumer stocks (although some high-quality names will be resilient and look interesting on valuation).
Any company that missed the market's expectations on earnings was punished; if a company went as far as an earnings downgrade, the market showed little mercy. We think this trend will likely continue into the February 2023 reporting season. These are risks we aim to avoid by assessing the quality of our investments and their earnings streams.
Looking further out, there is an argument that Australian small companies offer a significant opportunity for investors over 2023 if they wish to add some risk to their portfolios. They were the most sold-down part of the market in 2022, so the valuation of this sleeve of the market looks attractive.
History tells us that once the economy has reached peak inflation, the peak in interest rates is usually not far behind. If we do in fact see only two further rate rises from the RBA and inflation is contained, we will have a more solid foundation underpinning a backdrop conducive to a rally in equity markets.
As always, we are not out of the woods and do expect some earnings challenges to come to the fore in February's interim reporting season. Stock selection and active management will be critical to navigating this.
Should we see an improved outlook and a reduction in interest rates later in the year, we may start to see an improvement in investor and market sentiment. This is likely the final ingredient needed for capital flows to return more strongly to equities, and to small companies in particular.
Overall, we continue to hold a meaningful exposure to the resources sector: with China reopening, supply shortages remaining an issue for Europe in the medium term, and the continued drive towards decarbonisation, we think 2023 should be another supportive year for the sector.
We also hold quality exposures to structural growth companies that, over a medium-term investment horizon, represent excellent value. We believe we are closer to the end than the beginning of the inflation and interest rate story, which over the course of 2023 should provide a favourable foundation for the market and the Montgomery Small Companies Fund.
LAWRENCE — As humanity tries to find its footing after the COVID-19 pandemic, the University of Kansas is taking steps to help ready the United States and the rest of the world for future global health crises.
A. Townsend Peterson, a University Distinguished Professor of Ecology & Evolutionary Biology and curator of ornithology at the KU Biodiversity Institute and Natural History Museum, is part of a team of researchers that earned funding from the National Science Foundation to establish the International Center for Avian Influenza Pandemic Prediction and Prevention, dubbed “ICAIP3.”
The mission of the new multi-institutional center is to tackle grand challenges in global health, with a focus on avian influenza pandemic prediction and prevention. Most famously, the 1918 flu pandemic showed that influenza viruses which start off in birds can kill millions of humans. But avian influenza, or “bird flu,” has also triggered outbreaks around the world in recent years that killed billions of poultry and wild birds, as well as hundreds of people.
“The COVID-19 pandemic has been a wake-up call for the world, highlighting the importance of investing in public health and the basic science underpinnings of public health,” Peterson said. “It has had a scale of economic and public health impact that is unparalleled in our lifetime. This center would have ongoing viral monitoring around the world, but particularly in regions that tend to give rise to pandemic flu strains. We would have a predictive understanding of which types of new bird flu strains have pandemic potential. You can imagine the value of monitoring wild bird populations and seeing all the standing variation in flu viruses, and being able to say, ‘Hey, this one virus — this is what we need to watch.’”
The ICAIP3 center will be supported by the Predictive Intelligence for Pandemic Preparedness (PIPP) initiative, part of the NSF’s efforts to understand the science behind pandemics and build the ability to prevent and respond to future outbreaks.
“We need to be thinking big-picture when it comes to pandemics,” Peterson said. “COVID-19 is just one example of many diverse pandemics that have occurred throughout history. The Spanish flu, the plague pandemics, typhoid fever and avian influenza are all examples of diseases that have had a significant impact on human health and the economy. We need to be proactive in our approach to understanding and preventing these types of outbreaks, rather than waiting for them to happen and scrambling to respond.”
The total award for the PIPP project is roughly $1 million. Aside from KU, the ICAIP3 project has partners at the University of Oklahoma, where the work is headquartered, as well as the U.S. Geological Survey, the University of California-Berkeley and the World Health Organization Collaborating Centre for Studies on the Ecology of Influenza in Animals and Birds with St. Jude Children’s Research Hospital.
Peterson said the collaborators aim to apply for additional funding once ICAIP3 has proven itself as a proof of concept during its initial 18-month phase, which is structured to align with PIPP's aim of exploring ideas ahead of later competition for center-level funding.
The team will work to establish ongoing viral monitoring around the world, focusing most on regions that historically give rise to pandemic flu strains. The goal is to build understanding of the types of new strains holding pandemic potential and help predict and prevent outbreaks in coming decades.
Peterson and his collaborators will test available computer models that track “spillover,” where a disease can spread between animal species (“reservoir-poultry spillover” happens when wild birds give a disease to chickens, for example). Next, the team will work to improve these modelling approaches and run spillover simulations.
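The article does not describe the center's actual models, but the idea of a spillover simulation can be illustrated with a toy sketch: each day, infection may jump from a wild-bird reservoir into a poultry flock with some small probability, and once introduced it spreads within the flock. Every parameter name and value below is a hypothetical assumption for illustration, not anything from ICAIP3.

```python
import random

def simulate_spillover(days=365, reservoir_prevalence=0.02,
                       contact_rate=0.1, flock_size=1000.0,
                       beta=0.4, gamma=0.2, seed=42):
    """Toy wild-bird -> poultry spillover sketch (illustrative only).

    Each day the virus may be introduced into the flock with probability
    reservoir_prevalence * contact_rate; once introduced, it spreads via
    a simple daily SIR (susceptible/infected/recovered) update.
    """
    rng = random.Random(seed)
    s, i, r = flock_size, 0.0, 0.0   # susceptible / infected / recovered birds
    first_spillover_day = None
    for day in range(days):
        # Stochastic introduction event from the wild-bird reservoir.
        if i == 0.0 and rng.random() < reservoir_prevalence * contact_rate:
            s, i = s - 1.0, 1.0
            first_spillover_day = day
        if i > 0.0:
            # Daily SIR step: frequency-dependent transmission, fixed recovery.
            new_inf = beta * s * i / flock_size
            new_rec = gamma * i
            s -= new_inf
            i += new_inf - new_rec
            r += new_rec
    return {"first_spillover_day": first_spillover_day,
            "ever_infected": flock_size - s}
```

A real model of the kind the article describes would replace these toy parameters with data on reservoir prevalence, migratory contact patterns, and strain-specific transmissibility, but the structure — a rare introduction event feeding an onward-transmission model — is the core of what "reservoir-poultry spillover" simulation means.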
“If we do this well, what will come out is a model of the geographic, operational and individual-scale behavior of a pandemic-potential virus,” Peterson said. “Part of that potential is — does it stay just in one place? Or does it spread? If it does spread, does it take years, or does it spread in days?”
In essence, the KU researcher likened the work to devising an early-warning system to benefit researchers and public health officials as they decide where to devote resources for maximum effect.
With avian influenzas, part of this work must incorporate data about birds’ migratory patterns.
“You get some early warning of an outbreak going on and you say, ‘Okay, we’re pretty sure it’s a specific hypothetical virus — now, what are its most likely patterns of behavior?’” Peterson said. “How quickly will it leak from wild birds into domestic birds? If it’s coming from Asia, where would we expect it to appear in the U.S.? If you had this thing spread in the summer and get up to Siberia, then the jump may be way down into the U.S. because some of those birds think eastern Siberia is western Alaska and migrate south into the Americas in the fall. We would have a model that’s far better than what we have right now.”
Along with integrating huge amounts of disparate data into improved computer models, the collaboration will aim to build a community of researchers around a “One-Health (Human-Animal-Environment Systems) approach” they said is needed to take on “the complexity, dynamics and the tele-coupling of HAES across multiple spatial and temporal scales and organization levels.” Peterson said he hoped the work would also strengthen the nation’s ability to track disease in birds and other species, as well as safeguard public health and prevent societal disruption.
“What in our lifetime has had the scale of economic and public health impact compared to COVID-19?” Peterson said. “Maybe 9/11, if you could count the war efforts after that. We’re too young to have lived through the World Wars, which probably were on the same scale here in America. But what, since then — can you think of anything? If you want a stronger America, you make an America that has a strong public health system that can respond to socially driven health threats like vaccine hesitancy. Measles was gone, polio was gone, but now they’re popping up in communities that are less well-vaccinated. And we’ll see more mosquito-borne diseases — like West Nile virus, Zika, chikungunya and dengue — all of which have recently emerged in the U.S. and each in a very different way.”