Which New World Order Are We Talking About?

Authored by Jeff Thomas via InternationalMan.com,

Those of us who are libertarians have a tendency to speak frequently of “the New World Order.”

When doing so, we tend to be a bit unclear as to what the New World Order is.

Is it a cabal of the heads of the world’s governments, or just the heads of Western governments?

Certainly bankers are included somewhere in the mix, but is it just the heads of the Federal Reserve and the IMF, or does it also include the heads of JPMorgan, Goldman Sachs, etc.?

And how about the Rothschilds? And the Bundesbank—surely, they’re in there, too?

And the list goes on, without apparent end.

Certainly, all of the above entities have objectives to increase their own power and profit in the world, but to what degree do they act in concert? Although many prominent individuals, world leaders included, have proclaimed that a New World Order is their ultimate objective, the details of who’s in and who’s out are fuzzy. Just as fuzzy is a list of details as to the collective objectives of these disparate individuals and groups.

So, whilst most libertarians acknowledge “the New World Order,” it’s rare that any two libertarians can agree on exactly what it is or who belongs to it. We allow ourselves the luxury of referring to it without being certain of its details because “it’s a secret society,” as evidenced by the Bilderberg Group, which meets annually but has no formal agenda and publishes no minutes. We excuse ourselves for having only a vague perception of it, although we readily accept that it’s the most powerful group in the world.

This is particularly true of Americans, as Americans often imagine that the New World Order is an American construct, created by a fascist elite of US bankers and political leaders. The New World Order may be better understood by Europeans, as, actually, it’s very much a European concept—one that’s been around for quite a long time.

It may be said to have had its beginnings in ancient Rome. As Rome became an empire, its various emperors found that conquered lands did not automatically remain conquered. They needed to be managed—a costly and tedious undertaking. Management was far from uniform, as the Gauls could not be managed in the same manner as the Egyptians, who, in turn, could not be managed like the Mesopotamians.

After the fall of Rome, Europe was in many ways a shambles for centuries, but the idea of “managing” Europe was revived with the Peace of Westphalia in 1648. The peace brought an end to the Thirty Years’ War (1618-1648) in the Holy Roman Empire and the Eighty Years’ War (1568-1648) between Spain and the Dutch Republic. It brought together the Holy Roman Empire, the House of Habsburg, the Kingdoms of Spain and France, the Dutch Republic, and the Swedish Empire.

Boundaries were set, treaties were signed, and a general set of assumptions as to autonomy within one’s borders was agreed upon, to the partial satisfaction of all and to the complete satisfaction of no one… Sound familiar?

Later, Mayer Rothschild made his name (and his fortune) by becoming the financier to the military adventures of the German Government. He then sent his sons out to England, Austria, France, and Italy to do the same—to create a New World Order of sorts, under the control of his family through national debt to his banks. (Deep Throat was right when he said, “Follow the Money.”)

So, the concept of a New World Order has long existed in Europe in various guises, but what does this tell us about the present and, more important, the future?

In our own time, we have seen presidents and prime ministers come and go, whilst their most prominent advisors, such as Henry Kissinger and Zbigniew Brzezinski, continue from one administration to the next, remaining advisors for decades. Such men are often seen as the voices of reason that may be the guiding force that brings about a New World Order once and for all.

Mr. Brzezinski has written in his books that order in Europe depends upon a balance with Russia, which must be created through the control of Ukraine by the West. He has stated repeatedly that it’s critical for this to be done through diplomacy, that warfare would be a disaster. Yet, he has also supported the US in creating a coup in Ukraine. When Russia became angered at the takeover, he openly supported American aggression in Ukraine, whilst warning that Russian retaliation must not be tolerated.

Henry Kissinger, who has literally written volumes on his “pursuit of world peace,” has, when down in the trenches, also displayed a far more aggressive personality, such as his angry recommendation to US President Gerald Ford to “smash Cuba” when Fidel Castro’s military aid to Angola threatened to ruin Mr. Kissinger’s plans to control Africa.

Whilst the most “enlightened” New World Order advisors may believe that they are working on the “Big Picture,” when it comes down to brass tacks, they clearly demonstrate the same tendency as the more aggressive world leaders, and reveal that, ultimately, they seek to dominate. They may initially recommend diplomacy but resort to force if the other side does not cave to “reason” quickly.

If we stand back and observe this drama from a distance, what we see is a theory of balance between the nations of Europe (and, by extension, the whole world)—a balance based upon intergovernmental agreements, allowing for centralised power and control.

This theory might actually be possible if all the countries of the world were identical in every way, and the goals of all concerned were also identical. But this never has been and can never be the case. Every world leader and every country will differ in their needs and objectives. Therefore, each may tentatively agree to common conditions, as they have going back to the Peace of Westphalia, yet, even before the ink has dried, each state will already be planning to gain an edge on the others.

In 1914, Europe had (once again) become a tangle of aspirations of the various powers—a time bomb, awaiting only a minor incident to set it off. That minor incident occurred when a Serbian national assassinated an Austrian crown prince. Within a month, Europe exploded into world war. As Kissinger himself has observed in his writings, “[T]hey all contributed to it, oblivious to the fact that they were dismantling an international order.”

Since 1648, for every Richelieu that has sought to create a New World Order through diplomacy, there has been a Napoleon who has taken a militaristic approach, assuring that the New World Order applecart will repeatedly be upset by those who are prone to aggression.

Further, even those who seek to operate through diplomacy ultimately will seek aggressive means when diplomatic means are not succeeding.

A true world order is unlikely.

What may occur in its stead would be repeated attempts by sovereign states to form alliances for their mutual benefit, followed by treachery, one-upmanship, and ultimately, aggression. And very possibly a new World War.

But of one thing we can be certain: Tension at present is as great as it was in 1914. We are awaiting only a minor incident to set off dramatically increased international aggression. With all the talk presently circulating about a New World Order, I believe what will occur instead is a repeat of history.

If this belief is correct, much of the world will decline into not only external warfare, but internal control. Those nations that are now ramping up into police states are most at risk, as the intent is already clearly present. All that’s needed is a greater excuse to increase internal controls. Each of us, unless we favour being engulfed by such controls, might be advised to internationalise ourselves—to diversify ourselves so that, if push comes to shove, we’re able to get ourselves and our families out of harm’s way.



Insilico Medicine presents data on AI-designed cancer drugs at 3 major cancer conferences

Clinical stage artificial intelligence (AI) drug discovery company Insilico Medicine (“Insilico”) has been invited to present scientific data on its novel anti-cancer assets at three major upcoming cancer conferences — the European Society for Medical Oncology (ESMO) conference in Madrid Oct. 20-24, 2023; the Society for Immunotherapy of Cancer (SITC) conference Nov. 1-5, 2023 in San Diego; and the San Antonio Breast Cancer Symposium (SABCS) Dec. 5-9, 2023.

Small molecule oncology target inhibitors represent the largest part of Insilico’s therapeutic pipeline portfolio, which includes 31 programs across 29 targets. The Company recently entered into a licensing deal with Exelixis on its potentially best-in-class generative AI-designed USP1 inhibitor for BRCA-mutant tumors for $80m upfront and additional milestone payments and tiered royalties. That drug is currently in a Phase I clinical trial. 

“Using our AI platform, we have been able to advance a number of anti-cancer therapeutics that use new mechanisms to stop tumor growth and cancer progression, including two in clinical stage,” says Sujata Rao, MD, Chief Medical Officer at Insilico Medicine. Dr. Rao has extensive experience in clinical oncology practice and over 15 years in pharma leading global clinical development for cancer drugs. “Driven by a strategy of focusing on novelty, confidence, and commercial tractability, and designed to meet the high unmet medical needs of patients, we have developed a number of promising anti-cancer assets and look forward to presenting to the leading cancer conferences.”

Dr. Rao showcased four of the Company’s novel AI-designed cancer inhibitors at the most recent American Association for Cancer Research (AACR) annual meeting.

Insilico’s upcoming cancer conference presentations include:

  • ESMO – Oct. 20-24: At ESMO, Insilico will present data on ISM8207, a novel QPCTL inhibitor for triple negative breast cancer and B-cell non-Hodgkin lymphoma. The small molecule inhibitor, currently being evaluated in a Phase I trial, has demonstrated anti-tumor activity in both hematological tumors and solid tumors in preclinical studies, as well as favorable pharmacokinetics and safety profiles. This molecule is available for licensing partners in the U.S., Europe and Japan. 
  • SITC – Nov. 1-5: At SITC, Insilico will present data on ISM5939, a novel, potent, orally available, selective ENPP1 inhibitor cancer immunotherapy for multiple tumor types that enhances the anti-tumor effects of immune checkpoint inhibitors in syngeneic murine cancer models. It is also being advanced as a possible treatment for hypophosphatasia.
  • SABCS – Dec. 5-9: At SABCS, Insilico will present data on ISM5043, a novel, selective KAT6 inhibitor for the treatment of advanced ER+/HER2- breast cancer – the most common subtype of breast cancer. Current treatment for patients with advanced or metastatic disease is endocrine therapy in combination with CDK4/6 inhibitors, but many patients develop resistance to therapy, indicating a huge unmet medical need.

Insilico is advancing new therapeutics using generative AI via its proprietary end-to-end Pharma.AI platform for identifying novel targets (PandaOmics), designing new drugs (Chemistry42), and predicting the outcomes of clinical trials (InClinico). The platform has produced four drugs that have reached clinical trials, including a lead drug for the devastating chronic lung disease Idiopathic Pulmonary Fibrosis (IPF), the first AI-discovered and AI-designed drug to advance to Phase II trials with patients. 

“We’re really encouraged by the progress of our diverse pipeline of cancer therapeutics – two of which have progressed into clinical trials,” says Alex Zhavoronkov, PhD, founder and CEO of Insilico Medicine. “Our AI can be thought of as a ‘Google for targets’ that looks at every single small signal from massive datasets all over the world, including our own robotics data. It gives us signals that the target is working in a specific cancer, it already has demonstrated some efficacy, and it is going to be commercially tractable.” 

Dr. Rao and Insilico’s Chief Business Officer Michelle Chen, PhD, along with other business development professionals, will be in attendance at the upcoming conferences. For any interest in licensing or partnerships, please contact: bd@insilicomedicine.com.

 

About Insilico Medicine

Insilico Medicine, a global clinical stage biotechnology company powered by generative AI, is connecting biology, chemistry, and clinical trials analysis using next-generation AI systems. The company has developed AI platforms that utilize deep generative models, reinforcement learning, transformers, and other modern machine learning techniques for novel target discovery and the generation of novel molecular structures with desired properties. Insilico Medicine is developing breakthrough solutions to discover and develop innovative drugs for cancer, fibrosis, immunity, central nervous system diseases, infectious diseases, autoimmune diseases, and aging-related diseases. www.insilico.com



New polymer membranes, AI predictions could dramatically reduce energy, water use in oil refining

A new kind of polymer membrane created by researchers at Georgia Tech could reshape how refineries process crude oil, dramatically reducing the energy and water required while extracting even more useful materials.

The so-called DUCKY polymers — more on the unusual name in a minute — are reported Oct. 16 in Nature Materials. And they’re just the beginning for the team of Georgia Tech chemists, chemical engineers, and materials scientists. They also have created artificial intelligence tools to predict the performance of these kinds of polymer membranes, which could accelerate development of new ones.

The implications are stark: the initial separation of crude oil components is responsible for roughly 1% of energy used across the globe. What’s more, the membrane separation technology the researchers are developing could have several uses, from biofuels and biodegradable plastics to pulp and paper products.

“We’re establishing concepts here that we can then use with different molecules or polymers, but we apply them to crude oil because that’s the most challenging target right now,” said M.G. Finn, professor and James A. Carlos Family Chair in the School of Chemistry and Biochemistry.

Crude oil in its raw state includes thousands of compounds that have to be processed and refined to produce useful materials — gas and other fuels, as well as plastics, textiles, food additives, medical products, and more. Squeezing out the valuable stuff involves dozens of steps, but it starts with distillation, a water- and energy-intensive process.

Researchers have been trying to develop membranes to do that work instead, filtering out the desirable molecules and skipping all the boiling and cooling.

“Crude oil is an enormously important feedstock for almost all aspects of life, and most people don’t think about how it’s processed,” said Ryan Lively, Thomas C. DeLoach Jr. Professor in the School of Chemical and Biomolecular Engineering. “These distillation systems are massive water consumers, and the membranes simply are not. They’re not using heat or combustion. They just use electricity. You could ostensibly run it off of a wind turbine, if you wanted. It’s just a fundamentally different way of doing a separation.”

What makes the team’s new membrane formula so powerful is a new family of polymers. The researchers used building blocks called spirocyclic monomers that assemble together in chains with lots of 90-degree turns, forming a kinky material that doesn’t compress easily and forms pores that selectively bind and permit desirable molecules to pass through. The polymers are not rigid, which means they’re easier to make in large quantities. They also have a well-controlled flexibility or mobility that allows pores of the right filtering structure to come and go over time.

The DUCKY polymers are created through a chemical reaction that’s easy to produce at a scale that would be useful for industrial purposes. It’s a flavor of a Nobel Prize-winning family of reactions called click chemistry, and that’s what gives the polymers their name. The reaction is called copper-catalyzed azide-alkyne cycloaddition — abbreviated CuAAC and pronounced “quack.” Thus: DUCKY polymers.

In isolation, the three key characteristics of the polymer membranes aren’t new; it’s their unique combination that makes them novel and effective, Finn said.

The research team included scientists at ExxonMobil, who discovered just how effective the membranes could be. The company’s scientists took the crudest of the crude oil components — the sludge left at the bottom after the distillation process — and pushed it through one of the membranes. The process extracted even more valuable materials.

“That’s actually the business case for a lot of the people who process crude oils. They want to know what they can do that’s new. Can a membrane make something new that the distillation column can’t?” Lively said. “Of course, our secret motivation is to reduce energy, carbon, and water footprints, but if we can help them make new products at the same time, that’s a win-win.”

Predicting such outcomes is one way the team’s AI models can come into play. In a related study recently published in Nature Communications, Lively, Finn, and researchers in Rampi Ramprasad’s Georgia Tech lab described using machine learning algorithms and mass transport simulations to predict the performance of polymer membranes in complex separations.

“This entire pipeline, I think, is a significant development. And it’s also the first step toward actual materials design,” said Ramprasad, professor and Michael E. Tennenbaum Family Chair in the School of Materials Science and Engineering. “We call this a ‘forward problem,’ meaning you have a material and a mixture that goes in — what comes out? That’s a prediction problem. What we want to do eventually is to design new polymers that achieve a certain target permeation performance.”

Complex mixtures like crude oil might have hundreds or thousands of components, so accurately describing each compound in mathematical terms, how it interacts with the membrane, and extrapolating the outcome is “non-trivial,” as Ramprasad put it.

Training the algorithms also involved combing through all the experimental literature on solvent diffusion through polymers to build an enormous dataset. But, like the potential of membranes themselves to reshape refining, knowing ahead of time how a proposed polymer membrane might work would accelerate a materials design process that’s basically trial-and-error now, Ramprasad said.

“The default approach is to make the material and test it, and that takes time. This data-driven or machine learning-based approach uses past knowledge in a very efficient manner,” he said. “It’s a digital partner: You’re not guaranteed an exact prediction, because the model is limited by the space spanned by the data you use to train it. But it can extrapolate a little bit and it can take you in new directions, potentially. You can do an initial screening by searching through vast chemical spaces and make go, no-go decisions up front.”
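To make the “forward problem” concrete, here is a minimal, hypothetical sketch of such a screening loop: a surrogate regression model is trained on known (material, mixture) pairs with measured permeation, then used to rank unseen candidates against a performance cutoff. The descriptors, synthetic data, model choice, and threshold below are all invented for illustration; they are not the Georgia Tech group’s actual models, features, or data.

    # Illustrative sketch only: a toy "forward problem" surrogate, not the
    # Georgia Tech / Ramprasad group's actual models, descriptors, or data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical descriptors for (polymer, solute) pairs, e.g. fractional
    # free volume, chain rigidity, solute molar volume, solute polarity.
    n_samples, n_features = 500, 4
    X = rng.uniform(0.0, 1.0, size=(n_samples, n_features))

    # Synthetic "measured" log-permeability, used only to make the sketch run;
    # real training data would come from experimental transport measurements.
    y = (2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 1] * X[:, 3]
         + rng.normal(0.0, 0.1, n_samples))

    # Fit the surrogate on known (material, mixture) -> permeation examples.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:400], y[:400])

    # Screen a batch of candidate descriptor vectors and make go/no-go calls
    # up front against a hypothetical performance target.
    candidates = rng.uniform(0.0, 1.0, size=(1000, n_features))
    predicted = model.predict(candidates)
    threshold = 1.0  # invented cutoff in the same (arbitrary) units as y
    shortlist = candidates[predicted >= threshold]
    print(f"{len(shortlist)} of {len(candidates)} candidates pass the screen")

The published pipeline couples machine learning with mass transport simulations and is far more elaborate; the sketch only conveys the shape of the workflow: train on measured transport data, predict for unseen chemistries, and screen before synthesizing anything.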

Lively said he’d long been a skeptic about the ability of machine learning tools to tackle the kinds of complex separations he works with.

“I always said, ‘I don’t think you can predict the complexity of transport through polymer membranes. The systems are too big; the physics are too complicated. Can’t do it.’”

But then he met Ramprasad: “Rather than just be a naysayer, Rampi and I took a stab at it with a couple of undergrads, built this big database, and dang. Actually, you can do it,” Lively said.

Developing the AI tools also involved comparing the algorithms’ predictions to actual results, including with the DUCKY polymer membranes. The experiments showed the AI models’ predictions were within 6% to 7% of actual measurements.

“It’s astonishing,” Finn said. “My career has been spent trying to predict what molecules are going to do. The machine learning approach, and Rampi’s execution of it, is just completely revolutionary.”

This research was supported by the U.S. Department of Energy, grant No. DE-EE0007888; the European Research Council, grant No. 758370; the Kwanjeong Educational Foundation; a Royal Society University Research Fellowship; and the ExxonMobil Technology and Engineering Company. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any funding agency.



Study reveals areas of Brazilian Amazon where no ecological research has been done

Many parts of the Brazilian Amazon are neglected in ecological research, for several reasons, according to an article published in the journal Current Biology. Authored by Joice Ferreira of the Federal University of Pará (UFP) and colleagues from many countries who also belong to the Synergize Consortium, the article identifies the areas missing from ecological research and the factors that have determined these gaps, pinpointing opportunities for the planning of new investments in research in the region.

The researchers analyzed data from 7,694 ecological research sites to try to understand how logistics and human influence on the forests could explain the probability of research being done in different parts of the Amazon region. The period analyzed was 2010-20, and the survey covered nine groups of organisms: benthic invertebrates (living on the bottom or in the lowest layers of a water body), heteropterans (true bugs), odonates (dragonflies and damselflies), fish, macrophytes (aquatic plants), birds, woody vegetation, ants, and dung beetles.

“The consortium contacted people who had contributed to databases, standardized inventories and studies involving sampling efforts. Information was thereby compiled on three groups that represent Amazonian biodiversity: vertebrates, invertebrates, and plants in upland forests, flooded forests and aquatic environments – rivers, lakes, etc. This is the first paper published by the group,” said Mario Ribeiro de Moura, a researcher at the State University of Campinas’s Institute of Biology (IB-UNICAMP) in São Paulo, Brazil. He is a co-author of the article and a member of the consortium.

The findings revealed high susceptibility to climate change by 2050 in 15%-18% of the most neglected areas of the Brazilian Amazon. The least studied areas are also among the most threatened, lying in the vicinity of the “deforestation arc”, a swathe of territory extending along the southern, southeastern and eastern borders of Amazonia, mostly in the states of Acre, Amazonas, Maranhão, Mato Grosso, Pará, Rondônia and Tocantins.

The main gaps in Amazonian ecological research were in upland areas. “This was expected and probably reflects the role played by navigable waterways in facilitating access to blackwater and whitewater inundation forest, as well as other aquatic environments,” Moura said.

Not by chance, the least pessimistic scenarios appeared along rivers in northeast Pará and Roraima, southeastern Acre and northern Rondônia. “In these areas, the future impact of climate change will be less severe, and we have more knowledge of the species that live there,” Moura said.

The study was supported by FAPESP via two postdoctoral fellowships in Brazil. One of the fellowships was awarded to Raquel de Carvalho, and the other to Angélica Faria de Resende. Moura was supported by a Young Investigator Grant and a scholarship in Brazil.

Research biases

The scientists mapped the most neglected areas of the Amazon region in terms of ecological research and superimposed on this map the areas most likely to be affected by climate change, based on a metric they developed to reflect its intensity. Deforestation and degradation data were taken from a recent study published in Science on the drivers of deforestation in the Amazon. The correlations between datasets showed that ecological research in the Amazon is more frequent in deforested areas than in areas where deforestation is predicted in the next three decades.
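As a rough illustration of the kind of overlay just described, the sketch below intersects a hypothetical grid of “research neglect” scores with a hypothetical climate-exposure metric and reports what share of the most neglected cells are also highly exposed. The grids, cutoffs, and resulting numbers are invented for illustration; the consortium’s actual metric and spatial data are not reproduced here.

    # Illustrative sketch only: overlaying a hypothetical "research neglect" grid
    # with a hypothetical climate-exposure grid; not the study's actual metric or data.
    import numpy as np

    rng = np.random.default_rng(7)
    shape = (200, 200)  # toy raster standing in for the region of interest

    neglect = rng.random(shape)           # 0 = well studied, 1 = never surveyed (invented)
    climate_exposure = rng.random(shape)  # 0 = low exposure by 2050, 1 = high (invented)

    # Define "most neglected" and "highly exposed" cells by simple percentile cutoffs.
    neglected = neglect >= np.quantile(neglect, 0.75)
    exposed = climate_exposure >= np.quantile(climate_exposure, 0.75)

    # Fraction of the most neglected cells that are also highly climate-exposed.
    overlap = np.mean(exposed[neglected])
    print(f"Share of most-neglected cells that are also highly exposed: {overlap:.1%}")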

“Environmental change is happening at a very fast pace, including climate change and landscape transformation. To understand how these changes affect biodiversity, we need to know what was in a given area before they happened. The Amazon is one of the last significantly conserved refuges of tropical biodiversity and essential to an understanding of the isolated effect of climate change and habitat destruction on biodiversity,” Moura said. “The study highlighted the areas at risk of environmental change in the coming years and not yet explored by scientists. Without sufficient ecological research, we won’t be able to know what’s changing and what’s being lost.”

With regard to logistics, accessibility and distance to research facilities were key predictors of the probability of research being done. “Access is a mixed blessing, as evidenced by the deforestation arc. Easy access enables researchers to reach more areas, so part of this immense arc has been thoroughly studied, but it also enables those responsible for deforestation and other malefactors to reach these areas. Little information is available on the threatened areas at the edges of the deforestation arc,” Moura said.

Access, and hence research probability, increased with proximity to transportation and research facilities for all upland organisms and most representatives of wetlands and aquatic habitats. “The length of the dry season determines ease of access by water. In flooded forest areas, the shorter the dry season, the easier it is to gain access by river, and this increases the likelihood of research. In upland areas, more severe dry seasons facilitate overland access, with less mud and inundation,” Moura said.

Forest degradation and land tenure were also moderately effective predictors, with consistent importance across all organism groups. Both factors affected ecological research in the same direction, with research probability slightly declining in more degraded areas and Indigenous territories, but increasing in conservation units.

In short, less research is done in degraded areas and Indigenous territories, and more in conservation units. “It’s harder to obtain access to Indigenous communities, or there may be a lack of administrative mechanisms that connect researchers with the bodies that regulate such access and with the communities themselves. We need to improve integration between the parties involved, and above all engage local communities in the knowledge creation process. Far more research goes on in conservation units than Indigenous territories, although both are types of protected area,” Moura said.

In Carvalho’s opinion, this is a distribution problem, since Indigenous territories account for some 23% of the total area of the Brazilian Amazon. “At the same time, several Indigenous territories are the best conserved parts of the Amazon biome. It would be very valuable if we could do research there,” she said.
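For readers who want a concrete picture of this kind of analysis, the sketch below fits a simple logistic model of “was this site surveyed?” against access and land-tenure predictors. It is a toy with invented variable names and synthetic data, not the consortium’s actual statistical model, but it shows how coefficient signs can indicate which factors raise or lower research probability.

    # Illustrative sketch only: a toy model of "research probability" versus access
    # and land-tenure predictors, not the Synergize Consortium's actual analysis.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 2000  # hypothetical grid cells across the region

    # Hypothetical predictors per cell (names invented for illustration).
    dist_to_facility_km = rng.uniform(0, 800, n)
    dist_to_river_km = rng.uniform(0, 100, n)
    dry_season_months = rng.uniform(1, 6, n)
    indigenous_territory = rng.integers(0, 2, n)
    conservation_unit = rng.integers(0, 2, n)

    # Synthetic outcome: surveyed (1) or not (0), with research assumed less likely
    # far from facilities and in Indigenous territories, more likely in conservation units.
    logit = (1.5 - 0.004 * dist_to_facility_km - 0.01 * dist_to_river_km
             - 0.8 * indigenous_territory + 0.6 * conservation_unit)
    surveyed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([dist_to_facility_km, dist_to_river_km, dry_season_months,
                         indigenous_territory, conservation_unit])
    model = LogisticRegression(max_iter=1000).fit(X, surveyed)

    # Coefficient signs indicate which factors raise or lower research probability.
    names = ["dist_to_facility_km", "dist_to_river_km", "dry_season_months",
             "indigenous_territory", "conservation_unit"]
    for name, coef in zip(names, model.coef_[0]):
        print(f"{name:>22s}: {coef:+.4f}")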

Novel strategies

According to Moura, the Amazon Rainforest is under-represented in global databases used as a source for research on biodiversity. “As noted in the article, we need to integrate the information we have about the Amazon with global databases. The Synergize Consortium has projects that could contribute to global assessments. The information reviewed for this study mostly complies with the requirements of other databases and could be used to improve the representativeness of Amazonian biodiversity in future research on global change. The consortium plans to use this study as a basis for establishing itself as an important collaborative network for other research groups interested in analyzing environmental changes in the Amazon,” he said.

The Synergize Consortium’s principal investigators are Ferreira, who is affiliated with EMBRAPA Amazônia Oriental, a unit of the Brazilian Agricultural Research Corporation (EMBRAPA); and Filipe França, a researcher at the University of Bristol in the United Kingdom. Jos Barlow, a professor at the University of Lancaster, also in the UK, is a co-author of the article and a member of the consortium’s steering committee.

Moura believes the group’s findings can be used to develop novel funding strategies for the Amazon. “Once you’ve identified the gaps, you can target them for investment in conservation and research, or give more weight to research in these areas in future calls for proposals. Public policy and action plans can take these results into consideration, especially as far as biodiversity monitoring and inventorying are concerned,” he said.

About São Paulo Research Foundation (FAPESP)

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at www.fapesp.br/en and visit FAPESP news agency at www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.

