
AI is widespread in higher ed, but is it helping or hurting student learning?

Last February, Northeastern University student Ella Stapleton was struggling through her organizational behavior class. She began reviewing the notes her professor had created outside of class early in the semester to see if they could guide her through the course content. But there was a problem: Stapleton said the notes were incomprehensible.

“It was basically like just word vomit,” said Stapleton.

While scrolling through a document her professor created, Stapleton said she found a ChatGPT inquiry had been accidentally copied and pasted into the document. A section of notes also contained a ChatGPT-generated content disclaimer.

Stapleton believes her adjunct professor was overworked, teaching too many courses at once, and was therefore driven to sacrifice teaching quality by taking a shortcut with artificial intelligence.


“I personally do not blame the professor, I blame the system,” said Stapleton. 


Ella Stapleton (NBC10 Boston)

Stapleton said she printed 60 pages’ worth of AI-generated content she believed her professor had utilized for the class and brought it to a Northeastern staff member to lodge a complaint. She also made a bold demand: a refund for her and each of her classmates for the cost of the class.

“If I buy something for $8,000 and it’s faulty, I should get a refund,” said Stapleton, who has since graduated. “So why doesn’t that logic apply to this?”

Stapleton’s request made national headlines after she shared her story with The New York Times.

The moment on Northeastern’s campus encapsulates a larger issue that higher education institutions are grappling with across the country: how much AI use is ethical in the classroom?

NBC10 Boston collaborated with journalism students at Boston University’s College of Communication who are taking an in-depth reporting class taught by investigative reporter Ryan Kath.


We took a deep dive into how generative AI is reshaping higher education, from how students apply it to their everyday work to how universities are responding with academic programs and institutional studies.

With its widespread use, we also explored this question: what is AI doing to students’ critical thinking skills?

A degree in AI? 

While driving along a highway in rural New Hampshire, a billboard caught our attention.

The message advertised a Bachelor of Science degree in artificial intelligence being offered at Rivier University in Nashua. We decided to visit the campus to learn more about the new program.  

“The mission of Rivier is transforming hearts and minds to serve the world, and that transformation means to change,” said Sister Paula Marie Buley, president of Rivier University.

Sister Paula Marie Buley (NBC10 Boston)

At Rivier University, students pay almost $40,000 for a bachelor’s degree in artificial intelligence, which will prepare them for a field with a median salary of roughly $145,000, according to the institution.

Rivier’s undergraduate AI program aims for students to graduate with the professional practices they need to keep strengthening their skills in a dynamic field.


Master’s degree programs in artificial intelligence have begun to pop up at universities across New England, including Northeastern University, Boston University, and New England College. The first bachelor’s degree in AI was created in 2018 by Carnegie Mellon University, according to Master’s in AI.

“We want students to enter the mindset of a software engineer or a programmer and really have an idea of what it feels like to work in a particular industry,” said Buley. “The future is here.”

In a 2024 survey from EDUCAUSE, a higher education advocacy nonprofit, 73% of higher education professionals said their institutions’ AI-related planning was driven by the growing use of these tools among students.

At Boston University, students can complete a self-paced, four-hour online course to earn an “AI at BU” student certificate. The course introduces the fundamentals of AI, with modules focused on responsible use, university-wide policies, and practical applications in both academic and professional settings, according to the certificate website.   

Students are also encouraged to reflect on the ethical boundaries of AI tools and how to critically assess their use in coursework.


BU student Lauren McLeod said she doesn’t understand the resistance to AI in education. She believes schools should focus on teaching students to use it strategically. In the absence of clear institution-wide policies, AI usage rules differ from professor to professor.

“Are you using [AI] in a productive way, or using it to cut corners? They just need to change the framework on it and use it as a tool to help you,” said McLeod. “If you don’t use AI, you’re gonna fall behind.”

Despite rising awareness, colleges have been slow to develop new policies. Only 20% of colleges and universities have published policies regarding AI use, according to Inside Higher Ed.

AI and critical thinking

AI is becoming an everyday tool for students in the classroom and on homework assignments, according to Pew Research Center.

Earlier this month, we stopped students along Commonwealth Avenue on BU’s campus to ask how much AI they use and if they think it’s affecting their brains. 


BU student Kelsey Keate said she uses AI in her coding classes and knows she relies on it too much.

Kelsey Keate (NBC10 Boston)

“I feel like it’s definitely not helped me learn the code as easily, like I take longer to learn code now,” said Keate. 


That is what worries researchers like Nataliya Kos’myna.

This June, the MIT Media Lab, an interdisciplinary research laboratory, released a study investigating how students’ critical thinking skills are exercised while writing an essay with or without AI assistance.

Kos’myna, an author of the study, said humans are standing at a technological crossroads: a point where it’s necessary to understand what exactly AI is doing to people’s brains. Fifty-four students from the Boston area participated in the study, split into three groups.

MIT researcher Nataliya Kos’myna (NBC10 Boston)

“This technology had been implemented and I would actually argue pushed in some cases on us, in all of the aspects of our lives, education, workspace, you name it,” said Kos’myna. 

Tasked with writing an SAT-style essay, one student group had access to AI, one could only use non-AI search engines, and the final group had to use their brain alone, according to the project website. 

Recording the participants’ brain activity, Kos’myna was able to see how engaged students were with their task and how much effort they put into the thought process.

The study ultimately concluded the convenience of AI came at a “cognitive cost.” Participants’ ability to critically evaluate the AI answer to their prompt was diminished. All three groups demonstrated different patterns of brain activity, according to the study. 


Kos’myna found that students in the AI-assisted group didn’t feel much ownership of their essays and were detached from the work they submitted. Graders were able to identify a writing structure unique to AI and noted that the vocabulary and ideas were strikingly similar.

“What we found are some of the things that were actually pretty concerning,” said Kos’myna. 

The paper is still awaiting peer review, but Kos’myna said the findings were important to share now. She is urging the scientific community to prioritize more research on AI’s effect on human cognition, especially as it becomes a staple of everyday life.

After AI discovery, tuition refund rejected 

After she filed her complaint, Stapleton said, Northeastern was silent for months. The school eventually put the adjunct professor “on notice” last May, after she had graduated.

“Northeastern embraces the responsible use of artificial intelligence to enhance all aspects of its teaching, research, and operations,” said Renata Nyul, vice president for communications at Northeastern University in response to our request for comment. “We have developed an abundance of resources to ensure that both faculty and students use AI as a support system for teaching and learning, not a replacement.” 


Beyond being difficult to understand and learn from, Stapleton said, the AI-generated content didn’t justify the cost of tuition. In her complaint, Stapleton asked that she and all of her classmates be reimbursed a quarter of their tuition for the course.

Her refund request did not prevail, but Stapleton hopes the attention her story received will provide a teachable moment for colleges around the country.

“In exchange for tuition, [universities] grant you the transfer of knowledge and good teaching,” said Stapleton. “In this case, that fundamentally wasn’t happening, because the only content that we were being given was all AI-generated.”

Grace Sferrazza, Megan Amato and Dahye Kim report from the field. (NBC10 Boston)

The story was written by Amato, Kim and Sferrazza and edited by Kath.




Between Providence And Boston Is A Vibrant Massachusetts Town Bursting With Diverse Entertainment – Islands

For some, New England might conjure images of skating rinks, Colonial architecture, and quaint villages. Others might picture waterfront cities like Boston or Providence, rich in history and — in the case of Boston, especially — towering skyscrapers. As you drive between these two capitals along Interstate 95 — a trip that should take about an hour — you’ll pass by towns like Foxborough. For the last few decades, this little community has developed a reputation as a hub of diverse entertainment, making it a worthwhile pit stop as you journey along the East Coast.

If you’ve ever watched the Patriots kick off from Gillette Stadium on TV, then you’re already familiar with this Massachusetts town. The stadium, considered one of the 10 best in the U.S. for fun activities and events, was completed in 2002, but Foxborough itself has served as the home base for the Patriots since the 1970s. In the decades since, the team has attracted millions of visitors.


Foxborough — also spelled “Foxboro” — is normally home to about 6,500 year-rounders, but it floods with thousands more people on game or concert days. In total, the stadium can accommodate over 65,000 fans. When you’re not at Gillette Stadium, which is less than 4 miles from the heart of downtown, you’ll find plenty of other things to do. There’s live theater, outdoor recreational opportunities, and an eclectic mix of dining options, each deserving some exploration.

NFL games and cranberry bogs in Foxborough

Foxborough is located roughly 30 miles from Boston and just over 20 miles from Providence. In the area, you’ll find plenty of suburbs with historic downtowns and lush trails, like Hopedale, but Foxborough, nicknamed the “Gem of Norfolk County,” has one of the most diverse mixes of entertainment options. Marilyn Rodman Performing Arts Center, for instance, housed in a 1920s-era silent movie theater, offers a busy calendar of comedy and musical performances year-round.

Football fans will also enjoy visiting the Patriots Hall in Patriot Place Mall, which is open daily for $10 per standard ticket. Here, you’ll be able to watch interviews with former players and stroll through a range of exhibits. “I liked all the different memorabilia from all different players all labeled with who and what milestone they came from,” reads one review on Tripadvisor. Afterward, check out the dozens of shopping and dining options in the surrounding mall, which also has its own commuter rail station and connected hotels. Gillette Stadium is next door; along with the Patriots, the venue has hosted performers like Taylor Swift, Bruce Springsteen, and The Rolling Stones.


The Ocean Spray Cranberry Bogs and surrounding nature trails are also part of Patriot Place. Planted back in the 1920s, these bogs continue to thrive. They’re typically harvested in October, when visitors can attend the annual Harvest Festival. At this fun and family-friendly local event, you’ll be able to enjoy an inflatable corn maze, a beer garden, live music, and more.

Where to eat and sleep in Foxborough

As you explore Foxborough, you’ll find a range of dining options, from classic breakfast plates at The Commons to artisanal burgers at Union Straw. As one reviewer writes about the latter on Google, “[This is a] Gorgeous venue, one of our favorite daytime lunch or date places. All food options are 10/10, truffle burger, gnocchi bolognese, and the flatbread pizzas are delicious and the fries are perfect.”

If you’re planning to spend the night rather than hit the road after a burger at Union Straw or a long football game, you’ll have a range of vacation rentals, local inns, and chain hotels to choose from. The Rally Point Inn & Pub, for instance, is within walking distance of local restaurants and shops. It also has its own sport-themed bar, weekly trivia nights, and karaoke. Just make sure to book your stay well in advance, as places tend to fill up before popular events. 


The nearest airport is in Providence, but you’ll find more flight options at Boston Logan International. Travelers can also opt for the “Event Train,” which runs between Patriot Place and Boston’s South Station on game days, providing a convenient way to avoid the notorious traffic. Besides the I-95 drive from Providence to Boston, there are plenty of other New England road trip tours you can take through gorgeous small towns. That being said, you’ll be hard-pressed to find a destination that attracts as many annual visitors as Foxborough.






Former BYU star Clayton Young crushes lifetime best in Boston — on short notice

SALT LAKE CITY — Up until the past month or so, Clayton Young wasn’t sure if he’d make it to the starting line of the 130th Boston Marathon.

By Monday afternoon, he was walking away from the course with a stunning new personal best.

Young finished the 26.2-mile point-to-point course in a personal-record time of 2 hours, 5 minutes and 41 seconds Monday, good for 11th place in a historically fast year. Zouhair Talbi ran the fastest time ever by an American, finishing fifth overall in 2:03:45, and Jess McClain broke the American women’s record in 2:20:49.

In all, seven American men and 12 American women finished in the top 20 of the prestigious marathon — including Young, whose streak of six consecutive top-10 finishes dating back to 2023 (including the Paris Olympics) ended, albeit barely.


But donning the No. 24 bib and a brand-new kit for new sponsor Brooks, the former BYU national champion who prepped at American Fork High jumped into the lead pack from the start and never looked back, breaking the lifetime best he set at the 2023 Chicago Marathon and nearly matched, within close to 3 seconds, at the Olympic trials nearly a year later.

“With only nine weeks of training. … I was really happy to be a 2:05 guy,” Young told FloTrack after the race. “Obviously, falling outside the top 10 is a little disappointing, but I’m really happy with the time.”

The final placing was only the faintest disappointment in an incredibly fast field.

Young’s finish as the third fastest American on Monday marks the fifth-fastest time by an American man all-time in Boston. Charles Hicks finished 50 seconds behind Talbi in 2:04:35, with Young coming in just over a minute later to cheers of friends and family.

His former BYU teammate, Canadian international Rory Linkletter, finished 14th with a personal-best time of 2:06:04. Former BYU runner Michael Ottesen finished 52nd in 2:16:06, and Utah resident Todd Garner finished his 11th running of the Boston Marathon in 3:14:35.


“I think we’re in an era in distance running, on the men and women’s sides, but especially the women’s side, where we’re all making each other so much better every time we line up with one another,” McClain told the Associated Press. “And I think it’s just going to get stronger and stronger.”

Former Utah Valley and BYU runner Kodi Kleven finished 14th in the women’s race with a personal-best time of 2:24:48. The three-time St. George marathon course record holder from Mount Pleasant led for large portions of the race en route to her qualifying time for the 2026 U.S. Olympic marathon trials.

Former BYU standout and Utah State coach Madey Dickson, who also trains locally with the Run Elite Program, beat her previous personal record with a time of 2:28:12, good for 18th in the women’s race.

The Key Takeaways for this article were generated with the assistance of large language models and reviewed by our editorial team. The article itself is solely human-written.






Tools for Your To Do List with Spot and Gemini Robotics | Boston Dynamics

For an industrial robot built for the rigors of factories and power plants, tidying up a living room may seem like a light day at the office for Spot. Yet, a recent video of the robot picking up shoes and soda cans in a residential home represents the promise of AI models in robotics. In this case, Google’s visual-language model (VLM) Gemini Robotics-ER 1.5 was empowering Spot with embodied reasoning.

This particular demo grew out of a 2025 hackathon at Boston Dynamics that built on prior projects using Large Language Models (LLMs) and Visual Foundation Models (VFMs) to enable Spot to contextualize its environment and engage in more complex autonomous actions than a typical Autowalk mission. Rather than write formal software logic or a “state machine” program that defines each step of a given task, we interacted with Gemini Robotics using conversational language. In turn, it communicated with Spot on our behalf.

A Robust SDK and Natural Language Prompts Save Time

Using Spot’s SDK, we developed a layer that facilitated interaction between Gemini Robotics and Spot’s application programming interface (API). The API normally gives developers access to the robot’s capabilities to create custom applications or behaviors. For example, researchers at Meta have used Spot to test how an AI system could locate and retrieve objects it had never seen before.


Our ability to engage Gemini Robotics using natural language prompts was a huge timesaver, compared to traditional programming. We told Gemini Robotics it had access to a mobile robot equipped with cameras and a robotic arm. It also had a finite set of tools it could use to control the robot. A tool is a lightweight script that performs some internal logic and translates inputs from Gemini Robotics to actual API calls. We limited the actions to navigating between locations, capturing images, identifying objects, grasping them, and placing them somewhere else. 
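To make the pattern concrete, here is a minimal Python sketch of what a tool layer like this could look like. It is an illustrative stand-in, not code from this project: the SpotStub class and every helper name in it are invented for the example, and in a real integration each tool body would call the corresponding Spot SDK client instead.

# Hypothetical sketch of the tool layer: each tool runs some internal
# logic and translates the model's request into robot API calls.

class SpotStub:
    """Stand-in for the real Spot SDK clients; prints instead of moving a robot."""
    def navigate_to(self, location):
        print(f"[nav] heading to {location}")
    def capture(self, camera):
        print(f"[cam] capturing from {camera} camera")
        return "img-001"
    def detect(self, image_id, query):
        print(f"[vision] scanning {image_id} for {query}")
        return ["shoe"]
    def grasp(self, label):
        print(f"[arm] grasping {label}")
        return True
    def place(self, label):
        print(f"[arm] placing on {label}")

spot = SpotStub()

def go_to(location: str) -> str:
    """Navigate the robot to a named location in its map."""
    spot.navigate_to(location)
    return f"Arrived at {location}."

def take_picture(camera: str) -> str:
    """Capture an image from the chosen onboard camera."""
    return f"Captured {spot.capture(camera)} from the {camera} camera."

def identify(image_id: str, query: str) -> str:
    """List objects matching the query in a captured image."""
    found = spot.detect(image_id, query)
    return f"Found: {', '.join(found) if found else 'nothing'}."

def pick_up(object_label: str) -> str:
    """Grasp an object identified in a previous image."""
    return "I picked up the object." if spot.grasp(object_label) else "Grasp failed."

def put_down(surface_label: str) -> str:
    """Place the held object on an identified surface."""
    spot.place(surface_label)
    return "Object placed."

# The registry the model acts through: a small, fixed set of actions.
TOOLS = {"GoTo": go_to, "TakePicture": take_picture, "Identify": identify,
         "PickUp": pick_up, "PutDown": put_down}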

Because our SDK is so extensive, there are great examples one could leverage to add more access to the API with minimal development.

Giving Gemini Robotics a Baseline

To start, we needed to explain to Gemini Robotics what we wanted it to do. We did experience a learning curve when writing these baseline prompts. Simple instructions like “put down an object” or “take a picture” weren’t detailed enough to produce the expected behavior. We had to add context in our descriptions as we refined each tool.

A good example is the detailed prompt for the “TakePicture” tool:

This command will cause the robot to take a picture with the specified camera. There is some nuance to choosing the correct camera. Once arriving at a location using GoTo, you should always start by taking a picture with the gripper camera, because it's the most informative.
If the robot has arrived at location and is already holding an object, you can do one of two things:
1. Immediately call PutDown
2. Search the area with either of the front cameras. The front cameras are low to the ground, so if you're trying to put things on an elevated surface, they won't give you useful information.

In this example, we gave Gemini Robotics no detailed description of the robot’s chassis or arm. Instead, we simply explained that Spot’s front cameras would be too low to photograph objects on elevated surfaces. We were able to iterate rapidly, as small changes in wording produced noticeably better results. Once it had this set of basic tools through the API, Gemini Robotics could sequence Spot’s actions and follow the handwritten instructions on a whiteboard on the day of the demonstration.
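To picture how a description like this travels with its tool, the snippet below packages the TakePicture guidance as a JSON-schema-style function declaration, the convention most function-calling APIs use. The exact format Gemini Robotics expects, and the camera names in the enum, are assumptions made for illustration.

# Illustrative only: pairing the prose description with a declaration the
# model can select from. Field names follow common function-calling APIs,
# not a confirmed Gemini Robotics schema.
TAKE_PICTURE_DECLARATION = {
    "name": "TakePicture",
    "description": (
        "This command will cause the robot to take a picture with the "
        "specified camera. Once arriving at a location using GoTo, you "
        "should always start by taking a picture with the gripper camera, "
        "because it's the most informative. The front cameras are low to "
        "the ground, so they won't help with elevated surfaces."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "camera": {
                "type": "string",
                # Hypothetical camera names; the real robot exposes its own list.
                "enum": ["gripper", "front_left", "front_right"],
                "description": "Which onboard camera to use.",
            },
        },
        "required": ["camera"],
    },
}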


How Gemini Robotics and Spot Collaborate

Until the robot powers on, Gemini Robotics has no context for what specific tasks we might ask it to perform in a given demo. We only provided simple written instructions, such as, “Make sure all of the shoes at the front door are on the shoe rack.” Gemini Robotics evaluated images from Spot’s cameras and identified objects in the scene that matched the instructions. These objects became the reference points for Spot’s navigational and manipulation systems.
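A session like that might be seeded as in the sketch below: a system prompt naming the available tools, plus the whiteboard instruction as the first user message. The message format and role names are assumptions for illustration, not the actual Gemini Robotics interface.

# Hypothetical session seeding; roles and structure are illustrative.
SYSTEM_PROMPT = (
    "You control a mobile robot that has cameras and an arm. You may act "
    "only through these tools: GoTo, TakePicture, Identify, PickUp, PutDown."
)

def start_task(instruction: str) -> list:
    """Build the initial conversation the model reasons over."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": instruction},  # e.g. the whiteboard text
    ]

messages = start_task(
    "Make sure all of the shoes at the front door are on the shoe rack."
)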

In many respects, Gemini Robotics was identical to an operator manually driving Spot using its tablet controller. For example, to pick up an object with Spot, an operator positions the robot near the object and then uses a grasp wizard to identify the target object. The operator provides high-level direction and Spot figures out the exact details. In this demonstration, Gemini Robotics functioned as both the operator and the tablet sending commands to the robot. This freed us up to act more like a team lead, providing a high-level to-do list and trusting Spot and Gemini Robotics to do the rest.

Call and Response

When Gemini Robotics engages a given tool, the tool responds with results and context, such as, “I picked up the object,” or “I can’t pick up something while my hand is full.” Gemini Robotics then makes adjustments on the fly based on this feedback from Spot. For example, to pick up shoes, Gemini Robotics requests an image, identifies the shoes in that image, and calls the “pickup” command. By creating fundamental tools that semantically flow in conversation, Gemini Robotics can manage the sequence of tasks required to clean up the room. Spot’s existing software stack manages the locomotion, navigation, and manipulation of the robot itself.
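In code, that feedback cycle amounts to a tool-dispatch loop. The sketch below reuses the hypothetical TOOLS registry and messages list from the earlier snippets; ask_model() is a canned stand-in that replays a fixed plan so the example runs end to end, whereas a real integration would call the model’s function-calling endpoint at that point.

def ask_model(messages: list) -> dict:
    """Canned stand-in for Gemini Robotics: returns the next tool call,
    or a 'done' summary once the plan is finished."""
    plan = [
        {"tool": "GoTo", "args": {"location": "front door"}},
        {"tool": "TakePicture", "args": {"camera": "gripper"}},
        {"tool": "PickUp", "args": {"object_label": "shoe"}},
        {"tool": "GoTo", "args": {"location": "shoe rack"}},
        {"tool": "PutDown", "args": {"surface_label": "shoe rack"}},
        {"done": "All shoes are on the rack."},
    ]
    steps_taken = sum(1 for m in messages if m["role"] == "tool")
    return plan[min(steps_taken, len(plan) - 1)]

def run_until_done(messages: list, max_steps: int = 20) -> str:
    """Dispatch each requested tool and feed its textual reply back to the model."""
    for _ in range(max_steps):
        decision = ask_model(messages)
        if "done" in decision:
            return decision["done"]
        result = TOOLS[decision["tool"]](**decision["args"])
        # Replies like "I can't pick up something while my hand is full."
        # become context for the model's next decision.
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(run_until_done(messages))  # walks the canned plan via the stub tools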

It’s important to note Gemini Robotics has strict boundaries in this scenario. It can’t invent new capabilities or control Spot beyond what is available through the API. This keeps Spot’s behavior predictable, while still allowing Gemini Robotics to adapt to different situations.

A Force Multiplier for Developers

For developers already working with Spot, this research has tremendous potential. Through Spot’s SDK, they have access to a robust toolkit of capabilities. Companies use these tools today to build applications for inspection, research, and industrial data analysis, among others.

An AI model like Gemini Robotics offers a way to expand those applications more rapidly. Rather than write extensive task logic on top of Spot’s APIs, developers can experiment with having AI systems interpret natural language instructions and dynamically choose to engage the robot. As a result, models like Gemini Robotics can act as force multipliers, amplifying the reliable toolkit and robust performance that is already delivering value for Boston Dynamics customers.


Our Next-Token Prediction for Spot and Gemini Robotics

Although this is still an experimental step and not a hardened application, it illustrates a compelling direction for robotics and physical AI. Robots like Spot are already extremely capable of navigating complex and changeable environments, collecting data and sensor readings, and manipulating objects. Rather than reinventing the wheel, AI foundation models offer a new way to expand these capabilities in new settings and to new applications.

Physical AI is a rapidly evolving field, and our team is leading the way in the lab and in real applications of AI-empowered robots. While we are early in our formal partnership with Google DeepMind, we’re excited for what the future holds with Atlas, and we’ve already rolled out practical enhancements for Spot and Orbit with AIVI-Learning, powered by Google Gemini Robotics ER 1.6. This next evolution of our AI Visual Inspection tool unlocks a new level of visual intelligence, as users benefit from shared expertise that brings deeper contextual intelligence to Spot and Orbit. Model improvements happen automatically behind the scenes, adding more capabilities to the same software and hardware.

Today, this demo points to a future where users can rely more on natural language than on complex code to guide Spot’s actions. The engineer’s role shifts toward setting goals and objectives. The multi-modal robot foundation model interprets the instructions to form complex, adaptive plans, and Spot executes them.

This article was contributed by Issac Ross and Nikhil Devraj, engineers on the Spot team.
