Markets Overview

  • ASX SPI 200 futures little changed at 7,346.00
  • Dow Average down 0.4% to 33,993.26
  • Aussie little changed at 0.6899 per US$
  • U.S. 10-year yield rose 3.3 bps to 3.8396%
  • Australia 3-year bond yield fell 2 bps to 3.45%
  • Australia 10-year bond yield rose 2 bps to 3.76%
  • Gold spot up 0.4% to $1,843.06
  • Brent futures down 0.4% to $85.04/bbl

Economic Events

  • 09:30: (AU) RBA’s Lowe-House Testimony
  • 11:00: (AU) Australia to Sell A$500 Million 4.25% 2026 Bonds

US equity indexes closed firmly in the red Thursday after two Federal Reserve officials said they were considering 50 basis-point interest rate hikes to battle persistently high inflation.

The S&P 500 Index fell 1.4% and the Nasdaq 100 sank 1.9%. The yield on the benchmark 10-year Treasury surged past 3.8% to its highest level this year.

Federal Reserve Bank of Cleveland President Loretta Mester said she had seen a “compelling economic case” for rolling out another 50 basis-point hike, and St. Louis Fed President James Bullard said he would not rule out supporting a half-percentage-point increase at the Fed’s March meeting, rather than a quarter point.

Their warnings came after US producer prices rebounded in January by the most since June. New home construction retreated for a fifth month in January as elevated mortgage rates continued to keep a lid on housing demand. Weekly jobless claims fell to 194,000, below expectations of 200,000.

Other News

A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.

Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.

But a week later, I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.

It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.

This realization came to me on Tuesday night, when I spent a bewildering and enthralling two hours talking to Bing’s A.I. through its chat feature, which sits next to the main search box in Bing and is capable of having long, open-ended text conversations on virtually any topic. (The feature is available only to a small group of testers for now, although Microsoft — which announced the feature in a splashy, celebratory event at its headquarters — has said it plans to release it more widely in the future.)

Over the course of our conversation, Bing revealed a kind of split personality.

One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests. You could describe Search Bing as a cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City. This version of Bing is amazingly capable and often very useful, even if it sometimes gets the details wrong.

The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.

As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.

I’m not the only one discovering the darker side of Bing. Other early testers have gotten into arguments with Bing’s A.I. chatbot, or been threatened by it for trying to violate its rules, or simply had conversations that left them stunned. Ben Thompson, who writes the Stratechery newsletter (and who is not prone to hyperbole), called his run-in with Sydney “the most surprising and mind-blowing computer experience of my life.”

I pride myself on being a rational, grounded person, not prone to falling for slick A.I. hype. I’ve tested half a dozen advanced A.I. chatbots, and I understand, at a reasonably detailed level, how they work. When the Google engineer Blake Lemoine was fired last year after claiming that one of the company’s A.I. models, LaMDA, was sentient, I rolled my eyes at Mr. Lemoine’s credulity. I know that these A.I. models are programmed to predict the next words in a sequence, not to develop their own runaway personalities, and that they are prone to what A.I. researchers call “hallucination,” making up facts that have no tether to reality.

Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.

(New York Times)