
Hosted by NovCog · EN
Whether you’re an AI enthusiast, a technology professional, or simply curious about the future of intelligent systems, the Novel Cognition AI Podcast offers valuable insights and engaging conversations. Join us as we unravel the complexities of machine learning, neural networks, natural language processing, and more.
Subscribe now to stay at the forefront of the AI revolution and gain a deeper understanding of the technology that’s shaping our world.

In this essential episode for AI engineers and developers, we unpack Anthropic's Agent Skills, a groundbreaking modular architecture that is fundamentally changing how AI agents gain specialization and maintain efficiency. Skills are organized folders of instructions, scripts, and resources that extend Claude’s functionality. They transform a general-purpose model into a specialized agent capable of handling complex tasks like Excel data analysis, PDF manipulation, or adhering to strict brand guidelines.

We delve into the technical advantage of progressive disclosure, the system that makes Agent Skills exceptionally token-efficient. Unlike the Model Context Protocol (MCP), which can consume tens of thousands of tokens by loading entire tool schemas at startup, Skills employ a three-level loading architecture.

What You Will Learn:
• Token Efficiency Explained: Discover how Skills achieve near-zero token overhead by loading only lightweight metadata (Level 1) at session start (around 100 tokens per skill). Full procedural knowledge and instructions (Level 2) are read dynamically via bash only when Claude autonomously determines the Skill is relevant.
• Specialization vs. Abstraction: Learn best practices for creating focused Skills that address one capability (e.g., "PDF form filling") rather than broad categories (e.g., "Document processing"). This clear definition is critical for ensuring Claude invokes the right Skill.
• The Agent Control Paradigm: We discuss how the filesystem-based architecture of Skills, which enables Claude to execute pre-written scripts reliably outside of the context window, allows for deterministic and repeatable operations. This architectural control is paramount for advanced use cases, directly supporting #hiddenstatedrift coaching: strategies aimed at maintaining consistency and reliability in complex, multi-step agent workflows.
• Skills and MCP: A Complementary Approach: While Skills teach Claude how to perform procedures, MCP connects Claude to external APIs and systems. We review how these two systems are designed to work together, with Skills providing the sophisticated workflow instructions for utilizing external tools accessed via MCP.

--------------------------------------------------------------------------------

Resources Mentioned:
• For advanced strategies on leveraging specialized AI architectures and cognitive models: [NovelCogntion.ai]
• For insights into AI-driven brand deployment and intelligence: [aibrandintelligence.com]

#AgentSkills #ProgressiveDisclosure #LLMAgents #TokenEfficiency #ClaudeAI #MCP #hiddenstatedrift #AICustomization #AgentArchitecture
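The two-level split described above can be sketched in a few lines. This is a minimal illustration, not Anthropic's actual tooling: the SKILL.md layout, helper names, and the example skill below are assumed for demonstration, with only the frontmatter read up front and the full instructions read on demand.

```python
# Sketch of progressive disclosure: cheap metadata at startup,
# full instructions only when the skill is judged relevant.
# The file layout and helpers here are illustrative assumptions.
import os
import tempfile

SKILL = """---
name: pdf-form-filling
description: Fill out PDF forms from structured data.
---
Step 1: open the PDF and list its form fields.
Step 2: map structured data onto the fields and save.
"""

def load_metadata(path):
    # Level 1: read only the frontmatter -- the lightweight
    # per-skill cost paid for every skill at session start.
    text = open(path, encoding="utf-8").read()
    header = text.split("---")[1]
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

def load_instructions(path):
    # Level 2: read the full body only after the metadata
    # suggests this skill applies to the current task.
    text = open(path, encoding="utf-8").read()
    return text.split("---", 2)[2].strip()

tmpdir = tempfile.mkdtemp()
skill_path = os.path.join(tmpdir, "SKILL.md")
with open(skill_path, "w", encoding="utf-8") as f:
    f.write(SKILL)

meta = load_metadata(skill_path)      # always loaded, tiny
body = load_instructions(skill_path)  # loaded on demand
```

The point of the split is that a session with dozens of installed skills only ever pays the frontmatter cost for the ones it never uses.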

In this episode of tech giants behaving badly, Anthropic shafted tens of thousands of paying users by crippling what are called Claude artifacts.

With zero notice to users, Anthropic shut off visibility to user creations known as artifacts, where users could post various content on the web, shackling the utility of Claude itself. The change involved setting the artifacts to “no-index,” meaning search engines won’t show the user-generated content.

It’s another example of tech company hubris, a violation of the common-law “warranty of merchantability,” and a virtual bait-and-switch scheme. Anthropic customers should complain to their state attorneys general, the Federal Trade Commission, and consumer affairs groups like the Better Business Bureau. It’s quite possible that class-action plaintiffs’ attorneys may find this a rich vein to mine.

#techfraud #claude #anthropic #classaction

Join us as we dive into the most provocative new AI architecture of the season: the Baby Dragon Hatchling (BDH), launched by Pathway. BDH is being touted as the "missing link between the Transformer and Models of the Brain", promising a paradigm shift in AI development.

Pathway claims that BDH, a novel "post-transformer" architecture, provides a foundation for Universal Reasoning Models by solving the "holy grail" problem of "generalization over time". The architecture is inspired by scale-free biological networks, using locally-interacting neuron particles and combining techniques like attention mechanisms and graph neural networks. We explore its unique features, including sparse and positive activation vectors, which lead to inherent interpretability, with empirical findings showing the emergence of monosemantic synapses.

But is this genuine innovation, or simply posturing?

The release has generated significant attention, placing BDH on the "Peak of Inflated Expectations" in the AI hype cycle. We conduct a red team analysis of the claims that have spurred fierce debate across the technical community, especially on platforms like Reddit.

Skeptics point out several critical challenges:
• Empirical Gaps: The promised Transformer-like performance is currently only validated against GPT-2 scale models (10M-1B parameters), failing to prove advantages at state-of-the-art scales.
• Conceptual Ambiguity: The central claim of "generalization over time" lacks a precise operational definition.
• Biological Oversell: Claims that BDH "explains one possible mechanism which human neurons could use to achieve speech" represent a "significant overreach" that lacks validation from modern neuroscience research.
• Methodological Concerns: The rapid move from publication to major press suggests insufficient time for crucial peer review and independent replication.

We discuss the long-term implications of this work on architectural diversity and AGI development pathways, and caution against the risk of misallocating research resources toward overly ambitious claims.

Tune in to understand if the Dragon Hatchling will truly usher in a new era of Axiomatic AI or if scientific skepticism remains the safest policy.

--------------------------------------------------------------------------------

For more depth on the discussion surrounding BDH and the future of AI architectures, check out these resources:
• Red Team Skepticism on Reddit: https://www.reddit.com/r/Burstiness_Perplexity/comments/1nzljhp/posttransformer_or_just_posturing_redteaming/
• Analysis of the Architecture: https://nov.link/skoolAI
• LinkedIn Review: https://www.linkedin.com/pulse/skeptically-looking-baby-dragon-hatchling-guerin-green-rpprc/
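To make "sparse and positive activation vectors" concrete, here is a toy illustration using a simple rectifying nonlinearity. This is not BDH's actual mechanism, which Pathway describes in its own terms; the sketch only shows why rectified activations are non-negative by construction and mostly zero, the property the interpretability claim rests on.

```python
# Toy sketch: rectified activations are positive and sparse.
# BDH's real dynamics differ; this only illustrates the property itself.
import numpy as np

rng = np.random.default_rng(0)
pre = rng.normal(size=1000)      # pre-activations: dense, mixed sign
act = np.maximum(pre, 0.0)       # rectification clips negatives to zero

nonneg = bool((act >= 0).all())        # positivity by construction
sparsity = float((act == 0).mean())    # fraction of silent units, ~0.5 here
```

Sparse positive codes matter for interpretability because each input activates only a small, identifiable subset of units, which is the setting where monosemantic units tend to be reported.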

We analyse how people are actually using ChatGPT...

READ MORE: https://nov.link/howpeopleusechatgpt

Understand how AI views your brand and business: https://aibrandintelligence.com

Guerin Green and Novel Cognition analyze the data:
• The Rise of the "Co-pilot": At work, ChatGPT isn't just automating tasks; it's becoming a crucial tool for decision support. A new "Asking, Doing, Expressing" framework reveals that nearly half of all interactions involve users Asking for advice and information to solve problems, a trend that is growing faster than direct task completion (Doing).
• Writing is King: For professionals, Writing is the number one use case, accounting for 40% of all work-related messages. Surprisingly, two-thirds of these tasks involve editing or refining existing text, not generating new content from scratch.
• Demystifying the Demographics: The initial gender gap in AI adoption has vanished, with women now slightly more likely to be active users. We're also seeing explosive growth in low- and middle-income countries, signaling a truly global diffusion of this technology.
• Surprising Use Cases (and a Few Myths Busted): Despite popular belief, computer programming makes up only 4.2% of messages, and companionship or personal reflection is even less common at just 1.9%. The dominant activities are Practical Guidance, Seeking Information, and Writing, which together account for nearly 80% of all conversations.

What is an "agentic supernet" and how does it differ from traditional multi-agent systems according to the provided text?

Glossary and study guide at: https://link.thecherrycreeknews.com/Bleeding-Edge

What are the two dilemmas that MaAS aims to address regarding current automated multi-agent systems?

Briefly explain how MaAS samples multi-agent architectures conditioned on input queries.
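As a study aid for the last question, the idea of sampling an architecture conditioned on the query can be sketched in miniature. MaAS's actual controller, operator set, and training objective are not described in these notes; the operators, query features, and weights below are invented purely for illustration.

```python
# Toy sketch of query-conditioned architecture sampling.
# Everything here (operators, features, weights) is a made-up stand-in,
# not MaAS's real mechanism.
import random

OPERATORS = ["planner", "coder", "critic", "web_search"]

def sample_architecture(query, rng):
    # Score each operator for this query with crude keyword features.
    weights = {op: 1.0 for op in OPERATORS}
    if "code" in query.lower():
        weights["coder"] += 2.0
    if "source" in query.lower() or "cite" in query.lower():
        weights["web_search"] += 2.0
    top = max(weights.values())
    # Include each operator with probability proportional to its weight,
    # so different queries tend to yield different agent teams.
    return [op for op in OPERATORS if rng.random() < weights[op] / (top + 1.0)]

rng = random.Random(42)
arch = sample_architecture("Write code to merge two CSV files", rng)
```

The takeaway is the conditioning itself: instead of one fixed multi-agent pipeline, the set of agents is drawn per query from a distribution the query shapes.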

Need to know more? http://nov.link/skoolAI