Technology
Trump administration bars former EU official, disinformation and hate speech researchers from US
On Tuesday, the Trump administration followed through on its threat of retaliation against foreigners involved in content moderation. The State Department announced sanctions barring US access for former EU commissioner Thierry Breton as well as four researchers, while issuing an intentionally chilling threat to others. Secretary of State Marco Rubio claimed, “The State Department stands ready and willing to expand today’s list if other foreign actors do not reverse course.”
One of the researchers the State Department says is now banned and deportable is Imran Ahmed, who runs the Center for Countering Digital Hate (CCDH), an organization that identifies and pushes back against hate speech online. Elon Musk tried and failed to silence the group with a lawsuit that was dismissed in early 2024; in his decision, Judge Charles Breyer wrote that X’s motivation for suing was to “punish CCDH for CCDH publications that criticized X Corp. — and perhaps in order to dissuade others.”
The other researchers include Anna-Lena von Hodenberg and Josephine Ballon, leaders of HateAid, a nonprofit that tried to sue X in 2023 for “failing to remove criminal antisemitic content,” as well as Clare Melford, leader of the Global Disinformation Index, which works on “fixing the systems that enable disinformation.”
The press release announcing the sanctions is titled “Announcement of Actions to Combat the Global Censorship-Industrial Complex.” That “complex” is the claimed target of Republicans like House Judiciary Committee chair Jim Jordan, who have worked against attempts to apply fact-checking and misinformation research to social networks. Earlier this month, Reuters reported that the State Department ordered US consulates to consider rejecting H-1B visa applicants involved in content moderation, and a few days ago the Office of the US Trade Representative threatened retaliation against European tech giants like Spotify and SAP over the supposedly “discriminatory” regulation of US tech platforms.
Technology
New iPhone scam tricks owners into giving phones away
Getting a brand-new iPhone should be a moment you enjoy. You open the box. You power it on. Everything feels secure. Unfortunately, scammers know that moment too.
Over the past few weeks, we’ve heard from a number of people who received unexpected phone calls shortly after activating a new iPhone. The callers claimed to be from a major carrier. They said a shipping mistake was made. They insisted the phone needed to be returned right away. One message stood out because it shows exactly how convincing and aggressive this scam can be.
“Somebody called me (the call said it was from Spectrum) and told me they sent the wrong iPhone and needed to replace it. I was to rip off the label on the box, tape it up and set it on my porch steps. FedEx was going to pick it up and they’d put a label on it. And just for my trouble, he’d send me a $100 gift card! However, the guy was just too anxious. He called me again at 7 am to make sure I would follow his instructions. Right after that, I picked up my box on the steps and called Spectrum, who confirmed it was a scam. There are no such things as refurbished i17 phones because they’re brand new. I called the guy back, said a few choice words and hung up on him. Since then, they have called at least twice for the same thing. Spectrum should be warning its customers!”
That second early morning call was the giveaway. Pressure is the scammer’s favorite tool.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Scammers often strike right after a new iPhone purchase, using urgency and fake carrier calls to catch you off guard before you have time to verify. (Kurt “CyberGuy” Knutsson)
How the new iPhone replacement scam works
This scam relies on timing and pressure. First, criminals focus on people who recently bought a new iPhone. That information often comes from data-broker sites, leaked purchase data or marketing lists sold online. Next, scammers spoof a carrier phone number. As a result, the call appears legitimate. They sound confident and informed because they already know the device model you ordered.
Once the call begins, the story moves quickly. The scammer claims a shipping mistake occurred. Then they insist the phone must be returned right away. To reinforce urgency, they say a courier is already scheduled. If you follow the instructions, you hand over a brand-new iPhone. At that point, the device is gone. The scammer either resells it or strips it for parts. By the time you realize something is wrong, recovery is unlikely.
Why this scam feels so believable
This scam copies real customer service processes. Carriers do ship replacement phones. FedEx does handle returns. Gift cards are often used as apologies. Scammers blend those facts together and add urgency. They count on you acting before you verify. They also rely on one risky assumption: that a phone call that looks real must be real.
By spoofing trusted phone numbers and knowing details about your device, criminals make these calls feel real enough to push you into acting fast. (Kurt “CyberGuy” Knutsson)
Red flags that give this scam away
Once you know what to watch for, the warning signs are clear.
• Unsolicited calls about returns you did not request
• Pressure to act fast
• Instructions to leave a phone outside
• Promises of gift cards for cooperation
• Follow-up calls to rush you
Legitimate carriers do not handle returns this way.
Once a phone is handed over, it is usually resold or stripped for parts, leaving victims with no device and little chance of recovery. (Kurt “CyberGuy” Knutsson)
Ways to stay safe from iPhone return scams
Protecting yourself starts with slowing things down. Scammers rely on speed and confusion. You win by pausing and verifying.
1) Never return a device based on a phone call alone
Hang up and contact the carrier using the number on your bill or the official website. If the issue is real, they will confirm it.
2) Do not leave electronics outside for pickup
Legitimate returns use tracked shipping labels tied to your account. Carriers do not ask you to leave phones on porches or doorsteps.
3) Be skeptical of urgency
Scammers rush you on purpose. Pressure shuts down careful thinking. Any demand for immediate action should raise concern.
4) Use a data removal service
Scammers often know what phone you bought because your personal data is widely available online. Data removal services help reduce your exposure by removing your information from the data broker sites criminals rely on. No service can guarantee complete removal of your data from the internet, but a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
5) Install strong antivirus software
Strong antivirus software adds another layer of protection. Many antivirus tools help block scam calls, warn about phishing links and alert you to suspicious activity before damage is done.
The best way to safeguard yourself from malicious links that install malware and potentially expose your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
6) Save messages and call details
Keep voicemails, phone numbers and timestamps. This information helps carriers warn other customers and spot repeat scams.
7) Share this scam with others
Criminals reuse the same script again and again. A quick warning to friends or family could stop the next victim.
Kurt’s key takeaways
Scams aimed at new iPhone owners are getting more targeted and more aggressive. Criminals are timing their calls carefully and copying real carrier language. The simplest defense still works best. Verify before you act. If a call pressures you to rush or hand over a device, pause and contact the company directly. That one step can save you hundreds of dollars and a major headache.
If a carrier called you tomorrow claiming a mistake with your new phone, would you verify first or would urgency take over? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t
When your kid starts showing a preference for one of their stuffed animals, the standard advice is to buy a backup in case it goes missing. I’ve heard this again and again, but never got around to buying a second plush deer once “Buddy” became my son’s obvious favorite. Neither, apparently, did the parents in Google’s newest ad for Gemini.
It’s the fictional but relatable story of two parents discovering their child’s favorite stuffed toy, a lamb named Mr. Fuzzy, was left behind on an airplane. They use Gemini to track down a replacement, but the new toy is on backorder. In the meantime, they stall by using Gemini to create images and videos showing Mr. Fuzzy on a worldwide solo adventure — wearing a beret in front of the Eiffel Tower, running from a bull in Pamplona, that kind of thing — plus a clip where he explains to “Emma” that he can’t wait to rejoin her in five to eight business days. Adorable, or kinda weird, depending on how you look at it! But can Gemini actually do all of that? Only one way to find out.
I fed Gemini three pictures of Buddy, our real-life Mr. Fuzzy, from different angles, and gave it the same prompt that’s in the ad: “find this stuffed animal to buy ASAP.” It returned a couple of likely candidates. But when I expanded its response to show its thinking, I found the full 1,800-word essay detailing the twists and turns of its search as it considered and reconsidered whether Buddy is a dog, a bunny, or something else. It is bananas, including real phrases like “I am considering the puppy hypothesis,” “The tag is a loop on the butt,” and “I’m now back in the rabbit hole!” By the end, Gemini kind of threw its hands up and suggested that the toy might be from Target and was likely discontinued, and that I should check eBay.
‘I am considering the puppy hypothesis’
In fairness, Buddy is a little bit hard to read. His features lean generic cute woodland creature, his care tag has long since been discarded, and we’re not even 100 percent sure who gave him to us. He is, however, definitely made by Mary Meyer, per the loop on his butt. He does seem to be from the “Putty” collection, which is a path Gemini went down a couple of times, and is probably a fawn that was discontinued sometime around 2021. That’s the conclusion I came to on my own, after about 20 minutes of Googling and no help from AI. The AI blurb when I do a reverse image search on one of my photos confidently declares him to be a puppy.
Gemini did a better job with the second half of the assignment, but it wasn’t quite as easy as the ad makes it look. I started with a different photo of Buddy — one where he’s actually on a plane in my son’s arms — and gave it the next prompt: “make a photo of the deer on his next flight.” The result is pretty good, but his lower half is obscured in the source image so the feet aren’t quite right. Close enough, though.
The ad doesn’t show the full prompt for the next two photos, so I went with: “Now make a photo of the same deer in front of the Grand Canyon.” And it did just that — with the airplane seatbelt and headphones, too. I was more specific with my next prompt, added a camera in his hands, and got something more convincing.

I can see how Gemini misinterpreted my prompt. I was trying to keep it simple, and requested a photo of the same deer “at a family reunion.” I did not specify his family reunion. So that’s how he ended up crashing the Johnson family reunion — a gathering of humans. I can only assume that Gemini took my last name as a starting point here because it sure wasn’t in my prompt, and when I requested that Gemini create a new family reunion scene of his family, it just swapped the people for stuffed deer. There are even little placards on the table that say “deer reunion.” Reader, I screamed.
For the last portion of the ad, the couple use Gemini to create cute little videos of Mr. Fuzzy getting increasingly adventurous: snowboarding, white-water rafting, and skydiving, before finally appearing in a spacesuit on the moon addressing “Emma” directly. The commercial whips through all these clips quickly, which feels like a little sleight of hand given that Gemini takes at least a couple of minutes to create a video. And even on my Gemini Pro account, I’m limited to three generated videos per day. It would take a few days to get all of those clips right.
Gemini wouldn’t make a video based on any image of my kid holding the stuffed deer, probably thanks to some welcome guardrails preventing it from generating deepfakes of babies. I started with the only photo I had on hand of Buddy on his own: hanging upside down, air-drying after a trip through the washer. And that’s how he appears in the first clip it generated from this prompt: Temu Buddy hanging upside down in space before dropping into place, morphing into a right-side-up astronaut, and delivering the dialogue I requested.
A second prompt with a clear photo of Buddy right-side-up seemed to mash up elements of the previous video with the new one, so I started a brand new chat to see if I could get it working from scratch. Honestly? Nailed it. Aside from the antlers, which Gemini keeps sneaking in. But this clip also brought one nagging question to the forefront: should you do any of this when your kid loses a beloved toy?
I gave Buddy the same dialogue as in the commercial, using my son’s name rather than Emma. Hearing that same manufactured voice say my kid’s name out loud set alarm bells off in my head. An AI generated Buddy in front of the Eiffel Tower? Sorta weird, sorta cute. AI Buddy addressing my son by name? Nope, absolutely not, no thank you.
How much, and when, to lie to your kids is a philosophical debate you have with yourself over and over as a parent. Do you swap in the identical stuffie you had in a closet when the original goes missing and pretend it’s all the same? Do you tell them the truth and take it as an opportunity to learn about grief? Do you just need to buy yourself a little extra time before you have that conversation, and enlist AI to help you make a believable case? I wouldn’t blame any parent choosing any of the above. But personally, I draw the line at an AI character talking directly to my kid. I never showed him these AI-generated versions of Buddy, and I plan to keep it that way.
Nope, absolutely not, no thank you.
But back to the less morally complex question: can Gemini actually do all of the things that it does in the commercial? More or less. But there’s an awful lot of careful prompting and re-prompting you’d have to do to get those results. It’s telling that throughout most of the ad you don’t see the full prompt that’s supposedly generating the results on screen. A lot depends on your source material, too. Gemini wouldn’t produce any kind of video based on an image in which my kid was holding Buddy — for good reason! But this does mean that if you don’t have the right kind of photo on hand, you’re going to have a very hard time generating believable videos of Mr. Sniffles or whoever hitting the ski slopes.
Like many other elder millennials, I think about Calvin and Hobbes a lot. Bill Watterson famously refused to commercialize his characters, because he wanted to keep them alive in our imaginations rather than on a screen. He insisted that having an actor give Hobbes a voice would change the relationship between the reader and the character, and I think he’s right. The bond between a kid and a stuffed animal is real and kinda magical; whoever Buddy is in my kid’s imagination, I don’t want AI overwriting that.
The great cruelty of it all is knowing that there’s an expiration date on that relationship. When I became a parent, I wasn’t at all prepared for the way my toddler nuzzling his stuffed deer would crack my heart right open. It’s so pure and sweet, but it always makes me a little sad at the same time, knowing that the days where he looks for comfort from a stuffed animal like Buddy are numbered. He’s going to outgrow it all, and I’m not prepared for that reality. Maybe as much as we’re trying to save our kids some heartbreak over their lost companion, we’re really trying to delay ours, too.
All images and videos in this story were generated by Google Gemini.
Technology
Amazon adds controversial AI facial recognition to Ring
Amazon’s Ring video doorbells are getting a major artificial intelligence (AI) upgrade, and it is already stirring controversy.
The company has started rolling out a new feature called Familiar Faces to Ring owners across the United States. Once enabled, the feature uses AI-powered facial recognition to identify people who regularly appear at your door. Instead of a generic alert saying a person is at your door, you might see something far more personal, like “Mom at Front Door.” On the surface, that sounds convenient.
Privacy advocates, however, say this shift comes with real risks.
Ring’s new Familiar Faces feature uses AI facial recognition to identify people who regularly appear at your door and personalize alerts. (Chip Somodevilla/Getty Images)
How Ring’s Familiar Faces feature works
Ring says Familiar Faces helps you manage alerts by recognizing people you know. Here is how it works in practice. You can create a catalog of up to 50 faces. These may include family members, friends, neighbors, delivery drivers, household staff or other frequent visitors. After labeling a face in the Ring app, the camera will recognize that person as they approach. Anyone who regularly passes in front of your Ring camera can be labeled by the device owner if they choose to do so, even if that person is unaware they are being identified.
From there, Ring sends personalized notifications tied to that face. You can also fine-tune alerts on a per-face basis, which means fewer pings for your own comings and goings. Importantly, the feature is not enabled by default. You must turn it on manually in the Ring app settings. Faces can be named directly from Event History or from the Familiar Faces library. You can edit names, merge duplicates or delete faces at any time.
Amazon says unnamed faces are automatically removed after 30 days. Once a face is labeled, however, that data remains stored until the user deletes it.
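That retention rule described above amounts to a simple policy: unlabeled faceprints expire after 30 days, while labeled ones persist until the owner deletes them. As a purely illustrative sketch of that policy (this is not Ring’s actual code, data model, or API, and the field names here are hypothetical), it could be expressed like this:

```python
from datetime import datetime, timedelta

# Retention window for unlabeled faces, per Amazon's stated policy.
RETENTION = timedelta(days=30)

def purge_unlabeled(faces, now):
    """Keep labeled faces indefinitely; drop unlabeled faces older than 30 days.

    `faces` is a list of dicts with hypothetical keys 'label' (str or None)
    and 'first_seen' (datetime). This mirrors only the policy as described
    in Amazon's announcement, not any real Ring implementation.
    """
    return [
        face for face in faces
        if face["label"] is not None or now - face["first_seen"] < RETENTION
    ]
```

The key asymmetry to notice: the 30-day clock only protects people you never label, while a single tap to name someone keeps their faceprint stored indefinitely.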
Why privacy groups are pushing back
Despite Amazon’s assurances, consumer protection groups and lawmakers are raising alarms. Ring has a long history of working with law enforcement. In the past, police and fire departments were able to request footage through the Ring Neighbors app. More recently, Amazon partnered with Flock, a company that makes AI-powered surveillance cameras widely used by police and federal agencies.

Ring has also struggled with internal security. In 2023, the FTC fined Ring $5.8 million after finding that employees and contractors had unrestricted access to customer videos for years. The Neighbors app previously exposed precise home locations, and Ring account credentials have repeatedly surfaced online. Because of these issues, critics argue that adding facial recognition expands the risk rather than reducing it.
Electronic Frontier Foundation (EFF) staff attorney Mario Trujillo tells CyberGuy, “When you step in front of one of these cameras, your faceprint is taken and stored on Amazon’s servers, whether you consent or not. Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. It is important for state regulators to investigate.” The Electronic Frontier Foundation is a well-known nonprofit organization that focuses on digital privacy, civil liberties and consumer rights in the tech space.
Once a face is labeled by the device owner, Ring can replace generic notifications with named alerts tied to that individual. (CyberGuy.com)
Where the feature is blocked and why that matters
Legal pressure is already limiting where Familiar Faces can launch. According to the EFF, privacy laws are preventing Amazon from offering the feature in Illinois, Texas and Portland, Oregon. These jurisdictions have stricter biometric privacy protections, which suggests regulators see facial recognition in the home as a higher-risk technology. U.S. Senator Ed Markey has also called on Amazon to abandon the feature altogether, citing concerns about surveillance creep and biometric data misuse.
Amazon says biometric data is processed in the cloud and not used to train AI models. The company also claims it cannot identify all locations where a face appears, even if law enforcement asks. Still, critics point out the similarity to Ring’s Search Party feature, which already scans neighborhoods to locate lost pets.
We reached out to Amazon for comment but did not receive a response before our deadline.
Ring’s other AI feature feels very different
Not all of Ring’s AI updates raise the same level of concern. Ring recently introduced Video Descriptions, a generative AI feature that summarizes motion activity in plain text. Instead of guessing what triggered an alert, you might see messages like “A person is walking up the steps with a black dog” or “Two people are peering into a white car in the driveway.”
Ring’s Video Descriptions feature takes a different approach by summarizing activity without identifying people by name. (Amazon)
How Video Descriptions decides what matters
This AI focuses on actions rather than identities. It helps you quickly decide whether an alert is urgent or routine. Over time, Ring says the system can recognize activity patterns around a home and only notify you when something unusual happens. However, as with any AI system, accuracy can vary depending on lighting, camera angle, distance and environmental conditions. Video Descriptions is currently rolling out in beta to Ring Home Premium subscribers in the U.S. and Canada. Unlike facial recognition, this feature improves clarity without naming or tracking specific people. That contrast matters.
Video Descriptions turns motion alerts into short summaries, helping you understand what is happening without identifying who is involved. (Amazon)
Should you turn Familiar Faces on?
If you own a Ring doorbell, caution is wise. While Familiar Faces may reduce notification fatigue, labeling people by name creates a detailed record of who comes to your home and when. Given Ring’s past security lapses and close ties with law enforcement, many privacy experts recommend keeping the feature disabled. If you do use it, avoid full names and remove faces you no longer need. In many cases, simply checking the live video feed is safer than relying on AI labels. Not every smart home feature needs to know who someone is.
How to turn Familiar Faces on or off in the Ring app
If you want to review or change this setting, you can do so at any time in the Ring mobile app.
To enable Familiar Faces:
- Open the Ring app
- Tap the menu icon
- Select Control Center
- Tap Video and Snapshot Capture
- Select Familiar Faces
- Toggle the feature on and follow the on-screen prompts
To turn Familiar Faces off:
- Open the Ring app
- Go to Control Center
- Tap Video and Snapshot Capture
- Select Familiar Faces
- Toggle the feature off
Turning the feature off stops facial recognition and prevents new faces from being identified. Any labeled faces can also be deleted manually from the Familiar Faces library if you want to remove stored data.
Alexa is now answering your door for you
Amazon is also rolling out a very different kind of AI feature for Ring doorbells, and it lives inside Alexa+. Called Greetings, this update gives Ring doorbells a conversational AI voice that can interact with people at your door when you are busy or not home. Instead of identifying who someone is, Greetings focuses on what they appear to be doing. Using Ring’s video descriptions, the system looks at apparel, actions, and objects to decide how to respond.
For example, if someone in a delivery uniform drops off a package, Alexa can tell them exactly where to leave it based on your instructions. You can even set preferences to guide delivery drivers toward a specific spot, or let them know water or snacks are available. If a delivery requires a signature, Alexa can ask the driver when they plan to return and pass that message along to you. The feature can also handle sales representatives or service vendors. You might set a rule such as politely declining sales pitches without ever coming to the door yourself.
Greetings can also work for friends and family. If someone stops by while you are away, Alexa can greet them and ask them to leave a message for you. That interaction is saved so you can review it later.

That said, the system is not perfect. Because it relies on visual context rather than identity, mistakes can happen. A friend who works in logistics could show up wearing a delivery uniform and be treated like a courier instead of being invited to leave a message. Amazon acknowledges that accuracy can vary.

Importantly, Amazon says Greetings does not identify who a person is. It uses Ring’s video descriptions to determine the main subject in front of the camera and generate responses, without naming or recognizing individuals. That makes it fundamentally different from the Familiar Faces feature, even though both rely on AI.
Greetings is compatible with Ring Wired Doorbell Pro (3rd Gen) and Ring Wired Doorbell Plus (2nd Gen). It is available to Ring Premium Plan subscribers who have video descriptions enabled and is currently rolling out to Alexa+ Early Access users in the United States and Canada.
Thinking about a Ring doorbell?
If you are already in the Ring ecosystem or considering a video doorbell, Ring’s lineup includes models with motion alerts, HD video, night vision, and optional AI-powered features such as Video Descriptions. While Familiar Faces remains controversial and can be turned off, many homeowners still use Ring doorbells for basic security awareness and package monitoring.
If you decide Ring is right for your home, you can check out the latest Ring Video Doorbell models or compare features and pricing with other options by visiting Cyberguy.com and searching “Top Video Doorbells.”
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Amazon Ring’s AI facial recognition feature shows how quickly convenience can collide with privacy. Familiar Faces may offer smarter alerts, but it also expands surveillance into deeply personal spaces. Meanwhile, features like Video Descriptions prove that AI can be useful without identifying people. As smart home tech evolves, the real question is not what AI can do but what it should do.
Would you trade fewer notifications for a system that recognizes and names everyone who comes to your door? Let us know by writing to us at Cyberguy.com.