Most AI projects I see are just “we plugged GPT into a front end.” This sounds different, but I’m not fully tracking why.
You’re right that a lot of projects are basically a wrapper around a big model. BGI 2025 pushed teams to go further.
- Agentic workflows mean the system isn't just answering a single prompt; it's acting as an agent: gathering context, taking steps, checking conditions, maybe coordinating multiple tools or services (see the first sketch after this list).
- Symbolic reasoning (e.g., MeTTa) brings explicit rules and logic into the mix: if X, then Y; check source A against source B; justify why you reached conclusion C (see the second sketch after this list).
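
To make "agentic" concrete, here's a minimal sketch of the loop such a system runs, in Python. Everything here is a hypothetical stand-in: `call_llm` is a scripted fake in place of a real model call, and the tools are placeholders, not any specific hackathon project's API.

```python
# Minimal agentic loop: gather context, pick an action, act, check, repeat.
# All names (call_llm, TOOLS, the tool functions) are illustrative placeholders.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; here it just scripts two steps so the
    # example runs end to end.
    if "check_weather ->" in prompt:
        return "done: pack an umbrella"
    return "check_weather: Lisbon"

def search_docs(query: str) -> str:
    return f"results for {query!r}"   # placeholder tool

def check_weather(city: str) -> str:
    return f"forecast for {city}"     # placeholder tool

TOOLS = {"search_docs": search_docs, "check_weather": check_weather}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    trace = [f"goal: {goal}"]         # keep a trace so the run is auditable
    context = ""
    for _ in range(max_steps):
        # Ask the model for the next action given the goal and what we know so far.
        step = call_llm(f"Goal: {goal}\nContext: {context}\nNext tool and argument?")
        tool_name, _, arg = step.partition(":")
        if tool_name == "done":       # the model signals the goal is met
            trace.append(f"answer: {arg.strip()}")
            break
        tool = TOOLS.get(tool_name.strip())
        if tool is None:
            trace.append(f"unknown tool {tool_name!r}, stopping")
            break
        result = tool(arg.strip())    # act, then fold the result back into context
        context += f"\n{tool_name} -> {result}"
        trace.append(f"{tool_name}({arg.strip()}) -> {result}")
    return trace

print("\n".join(run_agent("plan a picnic in Lisbon")))
```

The point isn't the specific tools; it's the shape: the system observes, acts, and checks in a loop, and the trace it accumulates is what makes the run auditable later.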
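And here's a toy version of the symbolic side, written in Python rather than MeTTa purely for familiarity. The rules and fact names are made up; the point is that rules are explicit data ("if X, then Y"), so every conclusion can be reported along with the rule and facts that justified it.

```python
# Toy forward-chaining rule engine: rules are explicit data, so every
# conclusion carries the rule name and supporting facts that produced it.
RULES = [
    # (rule name, preconditions, conclusion)
    ("r1", {"rain_forecast", "outdoor_event"}, "recommend_postpone"),
    ("r2", {"source_a_agrees", "source_b_agrees"}, "claim_verified"),
]

def infer(facts: set[str]) -> list[tuple[str, str, set[str]]]:
    """Return (conclusion, rule_name, supporting_facts) for each rule that fires."""
    conclusions = []
    changed = True
    while changed:                    # keep firing rules until nothing new follows
        changed = False
        for name, pre, concl in RULES:
            if pre <= facts and concl not in facts:
                facts.add(concl)
                conclusions.append((concl, name, pre))
                changed = True
    return conclusions

for concl, rule, support in infer({"rain_forecast", "outdoor_event"}):
    print(f"{concl}: because rule {rule} matched {sorted(support)}")
# -> recommend_postpone: because rule r1 matched ['outdoor_event', 'rain_forecast']
```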
Why this matters:
- It improves explainability. When an AI system can show "here's what I did and why," you can audit it.
- It helps communities maintain it. You don't need to retrain a huge model to change a safety rule; you can update the symbolic logic (see the sketch after this list).
- It matches the decentralization ethos. Transparent reasoning is easier to share, fork, and govern across networks, not just inside a single company.
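
To illustrate the maintenance point: when policy lives as symbolic data like in the rule-engine sketch above, changing a safety rule is an ordinary edit (reviewable in version control), not a training run. A hypothetical example:

```python
# Because the policy is symbolic data, a community can change it with an
# ordinary, reviewable edit instead of retraining a model.
SAFETY_RULES = {
    "max_tool_calls": 5,
    "blocked_tools": {"delete_records"},
}

# A governance decision lands as a one-line change:
SAFETY_RULES["blocked_tools"].add("send_payments")

def allowed(tool_name: str, calls_so_far: int) -> bool:
    """Check a proposed agent action against the current symbolic policy."""
    return (tool_name not in SAFETY_RULES["blocked_tools"]
            and calls_so_far < SAFETY_RULES["max_tool_calls"])

print(allowed("send_payments", 1))  # False after the rule update
```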
So yes, LLMs are still in the mix, but the hackathon encouraged people to build AI that acts, reasons, and explains, rather than just autocompleting text.