
Australia debuts first multi-story 3D printed home – built in just 5 months


You can now listen to Fox News articles!

A major milestone in construction has arrived, this time from Western Australia. Contec Australia has completed the nation’s first multi-story 3D concrete printed home. Located in Tapping, near Perth, the two-story residence was finished in just five months. Most impressive? The structural walls were 3D printed in only 18 hours of active printing time.

This matters because it points to where housing might be heading here, too. With rising costs, labor shortages and a push for more sustainable building methods, this kind of breakthrough could shape the future of American neighborhoods.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


Why this build is a game-changer

Contec’s project isn’t just a prototype. It demonstrates how 3D concrete printing can bring major benefits to everyday housing. Compared to traditional masonry construction, the Tapping home achieved:

  • 22% cost savings on structural walls
  • More than 3x the strength of brick (50 MPa vs. 15 MPa)
  • Faster delivery, with the entire project completed in just five months

Contec Australia prints the final wall of the second level of a multi-story 3D printed home in Perth. (Contec Australia)

And it doesn’t cut corners on durability. The walls are fire-resistant, water-resistant, termite-proof and cyclone rated, features U.S. regions facing hurricanes, floods and wildfires could find especially appealing.


Exterior of a multi-story 3D concrete printed home located in Tapping, Australia. (Contec Australia)

How 3D concrete printing works

Instead of stacking bricks, Contec’s robotic printer extrudes a specialized concrete mix based on a digital 3D model. The mix sets in under three minutes, allowing new layers to be stacked without scaffolding or formwork.


The walls are printed in precise layers over the course of 18 hours of active machine time. Once the structural shell is complete, traditional crews step in to add the roof, wiring, windows, flooring and finishing touches.


Bathroom of a multi-story 3D concrete printed home located in Tapping, Australia. (Contec Australia)

Benefits that could apply in the U.S.

Speed: Structural walls finished in 18 hours; full build completed in five months.
Cost efficiency: 22% cheaper than comparable masonry builds in Western Australia.
Design freedom: Complex shapes, curves and openings without added expense.
Sustainability: 30% lower CO₂ emissions than conventional concrete and minimal waste.
Durability: More than three times stronger than brick, fire- and water-resistant and able to withstand harsh weather.

Dining room of a multi-story 3D concrete printed home located in Tapping, Australia. (Contec Australia)


How this compares to 3D printed homes in the U.S.

You may have already heard of Icon, the Texas-based startup that has been pioneering 3D printed homes. Icon’s builds include entire neighborhoods of single-story houses in Austin, as well as experimental multi-level projects. However, most of Icon’s multi-story designs rely on a hybrid approach, with 3D printing for the ground floor and timber or steel frames for the upper levels.

That’s what makes the Tapping project stand out. Contec printed the structural walls for both stories in just 18 hours of active printing time, something not yet widely seen in the U.S. This could signal the next step for American 3D printing: scaling beyond single-story housing into more complex multi-story designs.


Bedroom of a multi-story 3D concrete printed home located in Tapping, Australia. (Contec Australia)

How much does a 3D printed home cost?

One of the biggest questions people have is price. Contec hasn’t shared the exact cost of the Tapping home, but the company says it delivered the structural walls 22% cheaper than a standard masonry build. That saving adds up when you consider how much of a home’s budget goes toward labor and materials.


In the U.S., companies like Icon have priced 3D printed homes starting around $100,000 to $150,000, depending on size and finishes. While final costs vary by region, land and design, the potential savings from reduced labor and faster timelines make 3D printing an attractive option as housing costs continue to rise.


Kitchen and dining room of a multi-story 3D concrete printed home located in Tapping, Australia. (Contec Australia)

What this means for you

For American homeowners, builders and communities, the Tapping project shows how 3D concrete printing could offer faster, cheaper and more resilient housing. Imagine moving into a new home months earlier, with walls that are stronger, more sustainable and better able to handle extreme conditions.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right – and what needs improvement. Take my Quiz here: Cyberguy.com.


Kurt’s key takeaways

3D printed housing is moving from concept to reality. This home shows that walls can go up in just 18 hours, and a full build can be finished in only a few months. That kind of speed changes the way we think about construction. With rising costs and ongoing labor shortages, builders need new solutions. 3D concrete printing offers a path to faster, more affordable and more sustainable homes without cutting corners on strength or safety.

The big question is, if a 3D-printed home became available in your area, would you move in? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com. All rights reserved. 


$163K in fake medical bill charges; AI uncovers it for you



Last summer, a man’s brother-in-law suffered a fatal heart attack. The hospital bill for four hours of emergency care: $195,628.

The man’s sister-in-law was ready to pay it. He asked her to wait. He requested an itemized bill with CPT codes, the universal billing codes hospitals use, and fed the whole thing into Claude, an AI chatbot.

Within minutes, Claude found duplicate charges, services billed as “inpatient” even though the patient was never admitted, supply costs inflated by 500% to 2,300% above Medicare rates and charges for procedures that never happened. He cross-checked with ChatGPT. Both AIs agreed. He wrote a six-page letter citing every violation by name.

The hospital dropped the bill to $33,000. An 83% reduction. Zero medical training. A $20 app.


A man cross-checked a hospital bill with AI and got it reduced by some 83%. (Neil Godwin/Getty Images)

Your bill is probably wrong, too

That story sounds extreme. It’s not.

The Medical Billing Advocates of America estimates 3 out of 4 medical bills contain errors. The average hospital bill over $10,000 has roughly $1,300 in mistakes. And less than 1% of denied insurance claims are ever appealed. Hospitals and insurers are banking on the fact that you won’t check.

AI flips that equation. You don’t need to understand CPT codes or have a medical billing degree. You just need to paste.

You can use AI platforms, like ChatGPT, to spot errors or suspicious charges on medical bills. (Jaap Arriens/NurPhoto via Getty Images)


The 5-minute audit

Step 1: Call your provider and request an itemized bill with CPT codes. Not the summary. The full line-by-line breakdown. You’re legally entitled to this.

Step 2: Open ChatGPT, Claude, Grok or Gemini (free versions work) and paste this:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

Step 3: Paste your bill. The AI will translate every line and tell you what looks wrong.



If the AI finds errors, call the billing department and ask for a supervisor. (iStock)

Step 4: If the AI finds errors (it probably will), call the billing department and ask for a supervisor. Reference the specific codes. Hospitals resolve disputes all the time when patients show up prepared.
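The kinds of errors the AI looks for in Step 2, duplicate line items and charges far above Medicare rates, can also be pre-checked with a few lines of code. Here is a minimal Python sketch; the bill lines, CPT descriptions and reference rates below are illustrative, not real data.

```python
# A minimal local sketch of the checks described above: flagging
# duplicate CPT line items and charges far above a reference rate.
# All bill entries and Medicare rates here are hypothetical.
from collections import Counter

bill = [
    ("93010", "ECG interpretation", 250.00),
    ("93010", "ECG interpretation", 250.00),   # duplicate line item
    ("36415", "Blood draw", 175.00),
    ("85025", "Complete blood count", 90.00),
]

# Hypothetical Medicare reference rates per CPT code, in dollars.
medicare_rates = {"93010": 9.00, "36415": 3.00, "85025": 11.00}

def audit(bill, rates, max_markup=5.0):
    """Return human-readable flags for duplicates and inflated charges."""
    flags = []
    counts = Counter(code for code, _, _ in bill)
    for code, n in counts.items():
        if n > 1:
            flags.append(f"CPT {code}: billed {n} times (possible duplicate)")
    for code, desc, amount in bill:
        ref = rates.get(code)
        if ref and amount > max_markup * ref:
            pct = (amount / ref - 1) * 100
            flags.append(f"CPT {code} ({desc}): ${amount:.2f} is "
                         f"{pct:.0f}% above reference ${ref:.2f}")
    return flags

for flag in audit(bill, medicare_rates):
    print(flag)
```

A general-purpose AI goes further than this, since it can read free-text descriptions and spot bundling violations, but the sketch shows why an itemized bill with CPT codes, not a summary, is the essential input.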

Pro tip: Counterforce Health (counterforcehealth.org) is a free AI tool built specifically for insurance denial appeals. Worth bookmarking.

It’s time to give your medical bills a thorough examination. The AI will see you now.

Real talk. Everybody’s talking about AI. Nobody’s showing you what to actually DO with it. My new free newsletter, Splash of AI (SplashofAI.com), gives you one trick, one tool and one “wait, I can do THAT?” moment every single week. Five minutes. Plain English. The kind of stuff that saves you time, money or both. You’ll wonder how you got by without it.


Send this to someone who is staring at a medical bill they can’t make sense of. Forward this right now. Seriously. This could save them hundreds or even thousands of dollars, and it takes less time than making coffee.


Get tech-smarter. Starting today.

Kim Komando cuts through the tech noise so you don’t have to. Real advice. Zero jargon. Every single day.

Catch the national radio show on 500-plus stations, get the free daily newsletter, watch on YouTube or listen to the podcast wherever you get your shows. It’s all waiting at Komando.com.

Copyright 2026, WestStar Multimedia Entertainment. All rights reserved.


Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya


Meta’s AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show “bathroom visits, sex and other intimate moments.”

So far, at least one proposed class action lawsuit accusing Meta of violating false-advertising and privacy laws has emerged in response to Svenska Dagbladet’s reporting, citing the company’s claim that its smart glasses are designed for privacy:

By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer’s decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person’s life.

The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text or audio to help AI systems make sense of the data they’re training on. “We see everything — from living rooms to naked bodies,” one worker says, according to Svenska Dagbladet. “Meta has that type of content in its databases.”

A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this “does not always work as intended,” and some faces are still visible. Another person reportedly tells the outlet that a wearer’s bank cards are sometimes seen in the footage they review as well.

Meta’s Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance.


EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025, more than triple its combined sales from 2023 and 2024. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses “unless you turn off ‘Hey Meta.’” It also stopped allowing wearers to opt out of storing their voice recordings in the cloud.

As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses “stays on the user’s device” unless they choose to share it with other people or Meta.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Clayton says. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”


Inside Microsoft’s AI content verification plan



Scroll your social media feed for five minutes. You will likely see something that looks real but feels slightly off.

Maybe it is a viral protest image that turns out to be altered. Maybe it is a slick video pushing a political narrative. Or maybe it is an artificial intelligence voice clip that spreads before anyone stops to question it.

AI-enabled deception now permeates everyday life. And Microsoft says it has a technical blueprint to help verify where online content comes from and whether it has been altered.



Microsoft’s proposal would attach digital fingerprints and metadata to help trace where online content originated. (YorVen/Getty Images)

Why AI-generated content feels more convincing today

AI tools can now generate hyperrealistic images, clone voices and create interactive deepfakes that respond in real time. What once required a studio or intelligence agency now requires a browser window. That shift changes the stakes.

It is no longer about spotting obvious fakes. It is about navigating a digital world where manipulated content blends into your daily scroll. Even when viewers know something is AI-generated, they often engage with it anyway. Labels alone do not automatically stop belief or sharing. So Microsoft is proposing something more structured.

How Microsoft’s AI content verification system works

To understand Microsoft’s approach, picture the process of authenticating a famous painting. An owner would carefully document its history and record every change in possession. Experts might add a watermark that machines can detect, but viewers cannot see. They could also generate a mathematical signature based on the brush strokes.

Now Microsoft wants to bring that same discipline to digital content. The company’s research team evaluated 60 different tool combinations, including metadata tracking, invisible watermarks and cryptographic signatures. Researchers also stress-tested those systems against real-world scenarios such as stripped metadata, subtle pixel changes or deliberate tampering.


Rather than deciding what is true, the system focuses on origin and alteration. It is designed to show where the content started and whether someone changed it along the way.
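To make the origin-and-alteration idea concrete, here is a minimal Python sketch of a content fingerprint. It uses an HMAC as a simplified stand-in for the public-key signatures and watermarks a real provenance system like Microsoft’s would use; the key and content bytes are purely illustrative. The point it demonstrates is that any single-byte change to the content invalidates the fingerprint.

```python
# A minimal sketch of the signature idea behind provenance tracking.
# An HMAC stands in for real public-key signatures; the publisher
# key and image bytes are illustrative, not a real scheme.
import hashlib
import hmac

publisher_key = b"publisher-secret-key"          # stands in for a signing key
original = b"\x89PNG...original image bytes..."  # illustrative content

def sign(content: bytes, key: bytes) -> str:
    """Fingerprint the content at publication time."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes) -> bool:
    """True only if the content is byte-for-byte unchanged."""
    return hmac.compare_digest(sign(content, key), signature)

signature = sign(original, publisher_key)
tampered = original.replace(b"original", b"altered!")

print(verify(original, signature, publisher_key))   # True
print(verify(tampered, signature, publisher_key))   # False
```

Note what this does and does not prove: the fingerprint shows the bytes are unchanged since signing, but it says nothing about whether the content was truthful in the first place, which is exactly the limit the next section describes.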

What AI content verification can and cannot prove

Before relying on these tools, you need to understand their limits. Verification systems can flag whether someone altered content, but they cannot judge accuracy or interpret context. They also cannot determine meaning. For example, a label may indicate that a video contains AI-generated elements. It will not explain whether the broader narrative is misleading.

Even so, experts believe widespread adoption could reduce deception at scale. Highly skilled actors and some governments may still find ways around safeguards. However, consistent verification standards could reduce a significant share of manipulated posts. Over time, that shift could reshape the online environment in measurable ways.

Why AI labels create a business dilemma for social platforms

Here is where the tension becomes real. Platforms depend on engagement. Engagement often feeds on outrage or shock. And AI-generated content can drive both. If clear AI labels reduce clicks, shares or watch time, companies face a difficult choice. Transparency can clash with business incentives.



Invisible watermarks and cryptographic signatures could signal when images or videos have been altered. (Chona Kasinger/Bloomberg via Getty Images)

Audits of major platforms already show inconsistent labeling of AI-generated posts. Some receive tags. Many slip through without disclosure.

Now, U.S. regulations are stepping in. California’s AI Transparency Act is set to require clearer disclosure of AI-generated material, and other states are considering similar rules. Lawmakers want stronger safeguards.

Still, implementation matters. If companies rush verification tools or apply them inconsistently, public trust could erode even faster.

The risk of incorrect AI labels and false flags

Researchers also warn about sociotechnical attacks. Imagine someone takes a real photo of a tense political event and modifies only a small portion of it. A weak detection system flags the entire image as AI-manipulated.


Now, a genuine image is treated as suspect. Bad actors could exploit imperfect systems to discredit real evidence. That is why Microsoft’s research stresses combining provenance tracking with watermarking and cryptographic signatures. Precision matters. Overreach could undermine the entire effort.

How to protect yourself from AI-generated misinformation

While industry standards evolve, you still need personal safeguards.

1) Slow down before sharing

If a post triggers a strong emotional reaction, pause. Emotional manipulation is often intentional.

2) Check the original source

Look beyond reposts and screenshots. Find the first publication or account.

3) Cross-check major claims

Search for coverage from reputable outlets before accepting dramatic narratives.


4) Verify suspicious images and videos

Use reverse image search tools to see where a photo first appeared. If the earliest version looks different, someone may have altered it.

5) Be skeptical of shocking voice recordings

AI tools can clone voices using short samples. If a recording makes explosive claims, wait for confirmation from trusted outlets.

6) Avoid relying on a single feed

Algorithms show you more of what you already engage with. Broader sources reduce the risk of getting trapped in manipulated narratives.

7) Treat labels as signals, not verdicts

An AI-generated tag offers context. It does not automatically make content harmful or false.

8) Keep devices and software updated

Malicious AI content sometimes links to phishing sites or malware. Updated systems reduce exposure.


9) Strengthen account security

Use strong, unique passwords and a reputable password manager to generate and store complex logins for you. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com. Also, enable multi-factor authentication where available. No system is perfect. But layered awareness makes you a harder target.

Experts say stronger AI labeling standards may reduce deception, but they cannot determine what is true. (iStock)


Kurt’s key takeaways

Microsoft’s AI content verification plan signals that the industry understands the urgency. The internet is shifting from a place where we question sources to a place where we question reality itself. Technical standards could reduce manipulation at scale. But they cannot fix human psychology. People often believe what aligns with their worldview, even when labels suggest caution. Verification may help restore some trust online. Yet trust is not built by code alone.

So here is the question. If every post in your feed came with a digital fingerprint and an AI label, would that actually change what you believe? Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.
