Quantified Uncertainty Research Institute


QURI researches systematic practices to specify and estimate the most important parameters for the most important or scalable decisions. Research areas include forecasting, epistemics, evaluations, ontology, and estimation. We emphasize technological solutions that can scale heavily in the next 5 to 30 years.

Why?

We believe that humanity’s success over the next few hundred years will depend heavily on its ability to coordinate and make good decisions. If important governmental and philanthropic bodies become significantly more effective, society will be far more resilient to many of the challenges ahead.

Don’t recent failures of governments demonstrate that all organizational decision making is hopeless?

No. There have been setbacks very recently (Brexit, Trump, COVID challenges), but many successes as well (many other world leaders, OpenPhil, the Gates Foundation). In the long term, institutions will still matter, and we don’t think they are hopeless.

Do we actually think innovation will help here?

This isn’t clear, but there is promising evidence.

  • The internet is still very young and promising.
  • The recent forecasting literature is highly promising and still very early. See the work around Superforecasting, for example.
  • Technology that allows people to do complex probabilistic reasoning is just emerging (the last ~10 years). See recent probabilistic programming languages and simulation systems; a minimal sketch of this kind of estimation follows this list.
  • AI is advancing quickly. It’s quite possible it can be useful for strategic decisions if harnessed well.
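
To make the probabilistic-reasoning point concrete, here is a minimal sketch (in plain Python with NumPy, not QURI’s own tooling) of the kind of estimate these tools make easy: combining uncertain inputs via Monte Carlo simulation and reporting a distribution rather than a point estimate. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Illustrative estimate: total annual cost of a hypothetical program.
# Each input is a distribution rather than a single point estimate.
people_reached = rng.lognormal(mean=np.log(5_000), sigma=0.5, size=N)
cost_per_person = rng.normal(loc=12.0, scale=3.0, size=N).clip(min=1.0)

total_cost = people_reached * cost_per_person

# Report the uncertainty in the result instead of one number.
p5, p50, p95 = np.percentile(total_cost, [5, 50, 95])
print(f"Total cost: 5th={p5:,.0f}, median={p50:,.0f}, 95th={p95:,.0f}")
```

Probabilistic programming languages and simulation systems package this pattern up so that analysts can write such models far more compactly.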

What kinds of things might QURI do in the next 10 years?

  • Make open source probabilistic tooling for improved analyst decision making.
  • Run quick experiments on systematic forecasting techniques.
  • Offer ongoing services to:
    • Verify the decision making, modeling, and estimation of other groups.
    • Make useful forecasts on-demand.
  • Organize idealistic initiatives to display and manage large sets of forecasts about how the future may go.
  • Help set up a rubric for Effective Altruist projects, a committee to review projects on this rubric, and a financial prize structure for projects that are rated well.
  • Help formalize and organize a long list of precise definitions about global risk and risk factors.
  • Write research reports outlining expectations and opportunities in the fields of forecasting and decision making.
  • Experiment intensely with using these practices on QURI itself.
  • Make tooling to identify great forecasters/estimators and great forecaster/estimator practices, then work to make sure these are used.
    • Marketing could hypothetically be a huge challenge. There could be entire organizations doing nothing but promoting the Delphi technique.
    • We don’t expect QURI to be over 30% marketing and education in the next few years. We’d try to be cost-effective by doing things like coordinating closely with a few groups that could most use our research, including consulting groups that may propagate it further.
  • Paint clear pictures of optimistic and idealistic outcomes for great decision tools.

Select Principles & Heuristics

Longtermism

Optimize for longtermist goals. This includes a particular emphasis on Effective Altruist projects.

Startup innovation, not scientific innovation.

  • Yes: Small experiments to find exciting things.
  • Yes: Long-term projects on exciting things, with lots of weekly iteration to improve them.
  • No: Fixed-length, one-time projects, like a $2M forecasting study with a clear end.

We do whatever (ethically) needs to be done

  • If there’s a specific skill that’s necessary for something to be useful, we will try to find that skill, instead of giving up or assuming a different org will eventually pick things up where we leave them.
  • Example: We’re providing forecasting assistance to one group, and we realize that there’s a significant bottleneck for legal advice. We will try to find a legal consultant and work with them to solve the bottleneck.
  • Example: We make a forecasting platform for organizations, but they don’t use it much. We note this, then organize third-party forecasters to forecast the things important to these organizations, without needing significant buy-in.

Collaborate and be friendly with related groups.

If we find an exciting initiative but realize a different group is better suited to it, we try to help that group take it on.

Eat your own dogfood

Internally, use forecasting and decision techniques heavily for QURI decision making. This would be overkill for most organizations, but for us it is experimentation to be built upon in future engagements.

What won’t QURI do?

  • Typical academic research around non-decision-making cause areas, like AI progress.
    • We don’t expect to publish many academic papers, in part because that’s a comparative advantage of many other groups.
  • Focus primarily on ML methods for the next 5-10 years.
    • We hope that our methods can be greatly augmented by ML and will try to steer them in that direction, but we don’t expect to compete with OpenAI for ML talent. Most ML work requires lots of non-ML code and methods; we could focus on those non-ML parts, then collaborate with more ML-based groups later, when ML comes in.
  • Run psychology experiments to understand why people often make bad decisions.
    • We’re interested in how smart and sophisticated teams could dramatically improve, and be used to assist other groups.
  • Innovate on highly advanced and specific mathematics and philosophy that will only be used by fewer than 10 academics per year for 20 years or more.
    • We’re interested in techniques, achievable in the next 3-10 years, that would improve sizable teams. We generally can’t assume that team members will have master’s degrees in any area. We can assume that they could have STEM undergraduate degrees or similar, and could spend 1-6 months training.
  • Make tools for decision elicitation for uncalibrated experts
    • This could be a huge area in terms of both the challenge and the science. We think we can assume that the main estimates will be made by calibrated forecasters, as these will likely outperform experts.
  • Try to greatly popularize rationality or calibration testing
    • Calibration testing right now is fairly specific; we’re interested in finding much larger gains in elite groups later on.
  • Produce work that would likely be 5+ years out from being useful or used.
    • We want to make sure that at least someone is using this work for improved decision making. It’s okay though if this is QURI.
      • For example, we won’t work on scoring functions without a clear idea of how we or someone else might implement them in the next 4 years (a minimal scoring-rule sketch appears after this list).
  • Substantial & prolonged governmental consulting.
    • There are already many good consultancies for governments, businesses, and nonprofits. We don’t seek to compete with these. Our ideal would be that these groups incorporate techniques that we help pioneer.
    • We will likely work with specific groups in limited capacities, or partner with other consultancies, in order to try out innovative approaches that can be scaled by other groups.
    • We may do “consulting” with Effective Altruist organizations, which are typically quite meta.
  • $3M+ experiments
    • We're not particularly excited right now about pursuing large IARPA contracts and similar. These are often quite bureaucratic and slow-moving. We'd prefer that QURI do lots of small things on quick timescales, or long-lasting things that will exist for 5+ years.
  • Work that would be bad, in expectation, for the long term future of sentient life.
    • It’s possible that these techniques could be useful for authoritarian regimes to become more effective in ways that would be net harmful. We want to do our best to make sure that does not happen.
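
As one concrete reference point for the scoring-function and calibration points above, here is a minimal sketch of the Brier score, a standard proper scoring rule for binary forecasts (lower is better). The forecasts and outcomes are made-up illustrations, not real data, and this is not a description of any specific QURI system.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative data: a better-calibrated forecaster vs. an overconfident one
# on the same five binary questions (1 = event happened, 0 = it did not).
outcomes      = [1, 0, 1, 1, 0]
calibrated    = [0.8, 0.3, 0.7, 0.9, 0.2]
overconfident = [1.0, 0.0, 0.3, 1.0, 0.8]

print("calibrated:   ", brier_score(calibrated, outcomes))     # 0.054
print("overconfident:", brier_score(overconfident, outcomes))  # 0.226
```

Scoring rules like this make calibration measurable, which is part of why calibrated forecasters can be identified and compared against domain experts.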

[This project was created by the GiveWiki team. Please visit the website of the organization for more information on their work.]
