Qua

Reader

A list of all posts from all blogs on Qua whose authors have configured their blog to publish to this page.

from 工人小记

Coworker YQ: The clothes I ordered were apparently shipped from the back of beyond; I was wondering why they'd been in transit for three days and still hadn't arrived. Me: Are they still in Sichuan right now? Y: !! How did you know? Me: Because “the road to Shu is hard, harder than climbing to the sky.”

 
Read more

from veer66

I recently wanted to learn about MCP (Model Context Protocol). As someone whose default programming language is Common Lisp, I naturally decided to build an MCP server using Lisp.

Thanks to its creators, the 40ants-mcp library provides a pattern and code structure that I really like. However, I struggled significantly with installation and getting started: what should have taken minutes ended up taking days.

I'm sharing my experience here so that others who want to build MCP servers in Common Lisp can get started in minutes, not days like I did.

Prerequisites

Before you begin, make sure you have the following installed:

  • SBCL – A high-performance Common Lisp compiler
  • Roswell – A Common Lisp implementation manager and script runner
  • Quicklisp – The de facto package manager for Common Lisp
  • Ultralisp – A community-driven distribution of Common Lisp libraries

Installing Ultralisp

In SBCL with Quicklisp, you can enable Ultralisp by:

(ql-dist:install-dist "http://dist.ultralisp.org/" :prompt nil)

(Instructions for installing SBCL and Quicklisp are in the appendices.)

The Gotcha: Loading 40ants-mcp

Here's the issue that cost me days: when you try to load 40ants-mcp with:

(ql:quickload :40ants-mcp)

you may encounter errors. The solution is simple but not obvious: load jsonrpc first:

(ql:quickload :jsonrpc)
(ql:quickload :40ants-mcp)

This dependency isn't automatically resolved, which was the source of my frustration.

Creating Your MCP Server

Here's a minimal example from my mcp-exper package:

(in-package :mcp-exper)

(openrpc-server:define-api (mi-tools :title "mi-tools"))

(40ants-mcp/tools:define-tool (mi-tools add) (a b)
  (:summary "just add")
  (:param a integer "a")
  (:param b integer "b")
  (:result text-content)
  (make-instance 'text-content :text (format nil "~a" (+ a b))))

(defun start-server ()
  (40ants-mcp/server/definition:start-server mi-tools))

Key points:

  1. Use openrpc-server:define-api to define your API
  2. Use 40ants-mcp/tools:define-tool to define tools
  3. Return text-content instances for text results (MCP requires specific content types)
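
To sanity-check the server later, it helps to know what an MCP client actually sends: newline-delimited JSON-RPC 2.0 messages over stdio. Here is a small Python sketch that only builds the two requests (the protocolVersion string is an assumption; adjust it to the revision your client targets):

```python
import json

def jsonrpc(req_id, method, params):
    """Build one JSON-RPC 2.0 request as a single line of JSON."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# The client must initialize the session before calling tools.
init = jsonrpc(1, "initialize", {
    "protocolVersion": "2024-11-05",  # assumed revision string
    "capabilities": {},
    "clientInfo": {"name": "smoke-test", "version": "0"},
})

# Invoke the add tool defined above with a=1, b=2.
call = jsonrpc(2, "tools/call", {"name": "add", "arguments": {"a": 1, "b": 2}})

print(init)
print(call)
```

Piping these two lines into the server's stdin should produce JSON-RPC responses, assuming the server follows the standard MCP stdio transport.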

Running the Server

Create a Roswell script (mi-mcp-server.ros):

#!/bin/sh
#|-*- mode:lisp -*-|#
#| <- keep this line; it opens the block comment that hides the shell lines from Lisp
exec ros -Q -- $0 "$@"
|#
(progn
  (ros:ensure-asdf)
  #+quicklisp(ql:quickload '(:mcp-exper) :silent t))

(defun main (&rest argv)
  (declare (ignorable argv))
  (mcp-exper:start-server))

Quick Test

Run directly with Roswell:

ros mi-mcp-server.ros

Production Installation

Build and install as an executable:

ros build mi-mcp-server.ros
install -m 0755 mi-mcp-server $HOME/.local/bin/

Make sure $HOME/.local/bin is in your PATH.

Integrating with Opencode

To enable your MCP server in opencode, add this to ~/.config/opencode/opencode.json:

{
    "mcp": {
        "mi-tools": {
            "type": "local",
            "command": ["mi-mcp-server"],
            "enabled": true
        }
    }
}

Conclusion

Building MCP servers with Common Lisp is straightforward once you know the tricks. The 40ants-mcp library is well-designed, and the OpenRPC integration works smoothly.

I hope this guide saves you the days of frustration I experienced. Happy hacking!


The full source code for this example is available at mcp-exper.

Appendix: Installing SBCL

macOS

brew install sbcl

Debian/Ubuntu

apt install sbcl

Arch Linux

pacman -S sbcl

Appendix: Installing Quicklisp

Download and install Quicklisp:

wget https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
        --eval '(quicklisp-quickstart:install)' \
        --eval '(ql-util:without-prompting (ql:add-to-init-file))' \
        --quit
 
Read more...

from 工人小记

Male coworker Y asked Q how to find a certain file. Q: When W handed everything over during her transfer, she sent all the files from past years; just type a keyword into the search and it'll turn up. Y: Do I have to click into every single one to check? Q: Probably. Y: Whoever was in charge of this should do the searching, it's too much hassle. Q: So we should drag W back here? Y: You're the one collecting the files now, search for it yourself. Q: I collect what everyone sends and then compile it; you have to sort out your own first before handing it to me. Y: (swears at her) Q: Say that again. And just like that they were throwing things and fighting. :blobcatshocked:

Female coworker L: The operations manual was handed down, but Y won't learn it himself; what he turns in is a total mess, and he counts entirely on other people to fix it.

In my own experience: on hot days the office building runs the “central air conditioning,” and there's an explicit rule that “windows must not be opened while the AC is on” (to prevent waste?). Y: opens the window. Me: Doesn't the rule say no opening windows while the AC is running? By noon it still won't be cool enough and the AC will have to be turned way up, and it's so loud. You may not want to nap, but other people do. Y: None of your business. Me: If you're afraid of being called out, don't do it; you're the one in the wrong and you still try to pin it on me.

 
Read more

from clo.gnz

The concept of the Half-full glass is about neither glass nor water. Just as Negative growth is not about growth, and Feels-like temperature is not about temperature.

... These are the notes of the new kinds of music, which have nothing to do with Music.

 
Read more...

from mattesilver

A set of workshop tools

I've been looking for a tool to manage development tools: linters, test runners, and so on. Since these are typically standalone developer tools, something like pipx would qualify.

A tool manager should make the environment reproducible and portable. The installed tools should work not only on my machine but also on the machines of everyone else who works on the same project, and they should not conflict with other tools anyone has installed on their systems.

Ideally the tool would have a declarative configuration, so it's clear what's used and easy to upgrade.

The problem with task managers (that aren't also tool managers) is that they don't control the tools, so a task that works on one system may fail on another. Worse, a task that works at one point in time may fail later because of a new tool version. So a task manager should either also be a tool manager or be used together with one.

On the other hand, managing tools is a big subject in its own right: it's essentially a package manager, needing some sort of descriptor for a lot of tools in various languages, which must be installable on various systems and architectures.

In short, the requirements boil down to:

  • declarative tool SBOM
  • tool isolation (but with access to project dependencies)
  • task management

Installing tools directly in the system

The old way, where the user installs tools following the project readme. Not declarative, not reproducible, definitely not isolated.

❌ declarative ❌ isolated ❌ task management

Declarative package managers – Nix, Guix

Environments are reproducible and declarative (both use functional languages). Rather difficult to use and very intrusive: you pretty much install another operating system inside your operating system. I don't see any technical obstacle to making them single-user applications, but their authors chose against it. Unless you already use one of them, it's total overkill.

Maybe they work better with dev containers, but even then there are alternatives.

✅ declarative ✅ isolated ❌ task management

pipx

pip for executable packages. It supports installing multiple versions of the same tool, but there's no way to activate the required versions for a given project. And it only works for Python tools (and those with a Python wrapper).

❌ declarative ❌ isolated (tools are isolated from each other, but shared across projects) ❌ task management

uvx, npx, pnpx, etc

A set of tools that let you run packages in their respective languages as programs. Each works only with tools in its own language, but that's not the biggest problem, since the tools are typically written in the same language as the project, or have wrappers.

They aren't really designed to manage a toolbox; they're meant for one-off execution. There's no declarative versioning.

uvx is part of uv, which is a package manager and also a Python version manager, but somehow tool management is out of scope 🤷.

❌ declarative ✅ isolated ❌ task management

Tools as dev dependencies

It's really not what dev dependencies are meant for: everything is installed in a shared environment (venv, node_modules). In the Node.js world this should be fine, since every package maintains its own dependencies even when versions differ – that's why you end up with a huge node_modules. In Python you might end up with conflicting dependencies (which somehow doesn't stop Poetry projects from using the dev dependency group this way). Again, they only work for tools written in the same language as the project.

✅ declarative ❌ isolated ❓ task management (not in python, package.json has scripts)

pre-commit

Technically ticks many boxes, but not really intended as a dev tool manager.

Uses git repositories as a distributed source of packages, keeps them isolated and cached per project. Versions are fully declarative – you can even pin full SHA hashes.

This is all great, but it's literally git's pre-commit utility, which means it's designed for fast and predictable execution. It's not intended as a general-purpose tool manager: it doesn't provide the tools with the full project environment (it doesn't include dependencies), it doesn't support test runners (also because testing is too slow for a pre-commit hook), and even support for some linters (see Python's mypy) is suboptimal (again, mypy works best with access to dependencies).

✅ declarative ✅ isolated (actually too isolated for my purpose) ❌ task management

Mise

Works both as a package manager (tools) and task manager (replacement for package.json scripts, rather than for make). The only tool with the express goal of being a tool and task manager.

It doesn't handle tools with plugins well (e.g. pydantic.mypy) – it uses a shared directory for installs, but only the tool name and version make up a tool's “identity.” This means that if you have two projects that both use mypy, one of them with the pydantic mypy plugin, there's a conflict (there's a workaround, but it's not good for collaboration).

It also does funny things to $PATH so it doesn't work great with other tools, like IDEs or pre-commit.

✅ declarative ❌ isolated (buggy) ✅ task management

Dev containers

This isn't even a tool, it's simply a container (docker, podman, you name it) that has your tools installed and has access to your project directory.

It's great for isolation and reproducibility, as long as your installs are versioned. The isolation also makes curl | sudo sh - a bit more acceptable.

I haven't worked with them, but the concept seems very reasonable, especially for occasional builds of applications that you only use (but don't develop).

Conclusion

I don't see here a perfect tool, and I'm not even looking for a polyglot multi-project / monorepo manager.

Mise comes very close, and I'm currently using it, but it has a problem with isolation and doesn't seem interested in fixing it, so I wouldn't recommend it on its own in a collaborative environment.

I'll take a closer look at mise in a dev container. Dev containers would provide the isolation and mise covers declarative tool and task management quite well.

 
Read more...

from clo.gnz

You can't always dodge The System. Every now and then you have to make up your mind to confront it.

If only to confirm that you aren't confusing the emancipatory force of the new with the fear of laying bare your weaknesses in the old.

 
Read more...

from veer66

In the context of 1980s programming, “global variables” likely brings to mind languages like MBASIC. However, using MBASIC as an example today would be challenging, as it is now rarely used or even known. Instead, GNU Bash, the default shell scripting language on many systems, will be used to illustrate what global variables were traditionally like. Some might think of FORTRAN II as well, but I'm not familiar with it.

Bash Example

I wrote a Bash script consisting of four files:

a.bash

a.bash orchestrates everything. “Orchestration” might be too grand a word for these toy scripts, but I want to convey that it performs a role similar to Apache Airflow.

#!/bin/bash

. ./b.bash
. ./c.bash
. ./d.bash

init
print_count
inc
print_count

b.bash

b.bash contains a single function, inc, which increments counter, the global variable in this example.

inc() {
    counter=$((counter + 1))
}

c.bash

c.bash contains print_count, which simply displays the value of the global variable counter.

print_count() {
    echo $counter
}

d.bash

In Bash, counter can be initialized globally from within a function by default.

init() {
    counter=1
}

Python Example

The following section shows the result of porting the Bash script above to Python, highlighting the key differences.

a.py

This is the orchestration part. Note the use of namespaces or module names.

import c, b, d

d.init()
c.print_count()
b.inc()
c.print_count()

b.py

In the Python version, inc must refer to the module d to access the variable. Alternatively, counter could be explicitly imported.

import d

def inc():
    d.counter += 1

c.py

Similarly, print_count in Python must also refer to module d.

import d

def print_count():
    print(d.counter)

d.py

Unlike in Bash, initializing a global variable from within a function—even in the same module—requires an explicit global declaration.

def init():
    global counter
    counter = 1

Key Differences

As you can see, a global variable in Bash is truly global across files. In Python, however, a global variable is only global within its module, i.e., file. Furthermore, mutating a global variable in Bash requires no special syntax, whereas in Python a function must explicitly declare global to modify a module-level variable.

Consequently, although both are called “global variables,” Python's are scoped to the module. This means they won’t interfere with variables in other modules unless we deliberately make them do so. For developers who use one class per module, a Python global variable behaves much like a class variable. Additionally, variable assignment inside a Python function is local by default, preventing accidental modification of global state unless explicitly intended.
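
The local-by-default rule is easy to demonstrate; a minimal sketch:

```python
counter = 1  # module-level ("global") variable

def bump_local():
    counter = 99  # assignment creates a new local; the global is untouched
    return counter

def bump_global():
    global counter  # explicit opt-in to rebinding the module-level name
    counter += 1
    return counter

print(bump_local())   # -> 99
print(counter)        # -> 1 (unchanged)
print(bump_global())  # -> 2
```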

In short, many traditional precautions about global variables in languages like Bash or MBASIC no longer apply in Python. Therefore, we might reconsider automatically rejecting global variables based on past advice and instead evaluate their use case thoughtfully.

 
Read more...

from veer66

I was impressed by pkgsrc because it is a powerful package management system that defines packages using a common tool like a Makefile. I later discovered that the PKGBUILD file is even more impressive, as its purpose is immediately clear. I initially attributed this to my greater familiarity with Bash scripting compared to Makefiles, but I now believe the true reason is PKGBUILD's level of abstraction.

It retains explicit calls to configure and make, preserving the transparency of a manual installation. This demonstrates that while increased abstraction can make code shorter, it can also hinder understanding.

Another potential advantage of greater abstraction is the ability to change the configure command for every package by modifying just one location. However, since GNU Autotools has continued to use the configure command for decades, it may not be worth sacrificing clarity for this particular benefit.

 
Read more...

from veer66

(Posted at dev.to on 2025-09-02)

Expanding Emacs functionality is as simple as defining a new function instead of creating an entire extension package, as is often done in many other extensible editors. This function can then be re-evaluated, tested, and modified entirely within Emacs using just a few clicks or keyboard shortcuts, with no need to restart or reload Emacs.

 
Read more...

from mattesilver

I'm developing a small package for writing typed Web API clients, similar to Uplink but compatible with OpenAPI (lapidary). I use these clients mostly in Web servers, so I decided to make it async. So far I've settled on HTTPX, but I want to keep tabs on the available Python packages.

urllib.request

Its own documentation points elsewhere: “The Requests package is recommended for a higher-level HTTP client interface.”

It accepts request headers only as a dict, and I've decided that doesn't count as HTTP support.
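
For illustration, here is what the dict-only header handling looks like; a minimal sketch that only builds a request object without sending anything (the URL is a placeholder):

```python
import urllib.request

# Headers can only be supplied as a plain dict -- no multi-valued
# or ordered headers at this level of the API.
req = urllib.request.Request(
    "https://example.com/api",  # placeholder URL; nothing is fetched here
    headers={"Accept": "application/json"},
)

print(req.get_header("Accept"))  # -> application/json
```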

❌ HTTP/1.1 ❌ HTTP/2 ❌ HTTP/3 ❌ asyncio

urllib3

Fast and robust, slowly acquiring HTTP/2 support, but only sync.

✅ HTTP/1.1 ❌ HTTP/2 ❌ HTTP/3 ❌ asyncio

requests

This is the one every Python example tells you to use. It's been under a feature freeze since forever – HTTP moves on, but requests is as perfect as possible. It also “broke the internet” when it moved to urllib3 2.0.0, which dropped support for older TLS algorithms, in a minor version release. As long as the API doesn't change in an incompatible way, right?

✅ HTTP/1.1 ❌ HTTP/2 ❌ HTTP/3 ❌ asyncio

pycURL

It's super fast, since it's a super-thin wrapper over libcURL.

Also a great showcase of how not to write python wrappers. Might as well use ctypes.

✅ HTTP/1.1 ✅ HTTP/2 ✅ HTTP/3 (I think it's supported, libcURL says it's experimental) ❌ asyncio but non-blocking, since it's a native code.

HTTPX

The most popular async HTTP client package for python. Supports HTTP/2 but claims it's “not as robust” as 1.1. Both sync and async.

API is mostly compatible with requests (it's not a feature). Depending on usage pattern, it's slightly slower or much slower than aiohttp.

✅ HTTP/1.1 ✅ HTTP/2 (discouraged, opt-in) ❌ HTTP/3 ✅ sync & async

aiohttp

API is close to the protocol, so a bit more complex than alternatives, but I like it. One of the fastest packages in python.

The community seems more focused on the server side (more features and plugins).

✅ HTTP/1.1 ❌ HTTP/2 ❌ HTTP/3 ✅ asyncio-only

Aiosonic

Looks and feels like HTTPX.

Depending on usage pattern, it's either a bit slower or a bit faster than aiohttp. Probably there's something wrong with it, because nobody seems to be using it. Definitely worth a closer look.

✅ HTTP/1.1 ✅ HTTP/2 – beta ❌ HTTP/3 ✅ sync & asyncio

Aioquic

Really low level HTTP/3 & QUIC package. Only for genuine H3 users. Could use better examples – example client has 600+ lines. Or maybe it's that hard to use...

❌ HTTP/1.1 ❌ HTTP/2 ✅ HTTP/3 ✅ asyncio-only

Niquests

Async fork of requests with HTTP/2 and 3. Fast, full featured but not user-friendly.

It uses a forked urllib3 (by the same author) that shadows it, so if you happen to have a transitive dependency on the original, it will mess up your environment. The author is fine with it. I recommend against using it.

✅ HTTP/1.1 ✅ HTTP/2 ✅ HTTP/3 ✅ sync & asyncio

Reqwest wrappers

This makes a lot of sense – Rust is a popular language, reqwest is a popular Rust package, keep it DRY. None of the wrappers are widely popular, but some projects are live and definitely worth watching:

  • gufo-http – I think it's the oldest and still maintained
  • pyreqwesthttpr

Found a few more, just search reqwest in pypi.

✅ HTTP/1.1 ✅ HTTP/2 ❌ HTTP/3 – maybe, it's still experimental in reqwest ✅ sync and asyncio

Conclusion

Honestly, the state of the clients for the protocol that makes the Web run makes me wonder: is Python still alive? There isn't a single popular and stable package with HTTP/2 (2015!) or HTTP/3 (2022) support, either sync or async. Is Python only for Django and LLMs? Has everyone else switched to Rust and/or Deno?

 
Read more...

from mattesilver

Based on the Litestar Fullstack Inertia showcase project.

Prerequisites

For social authorization to work, you first need to register your application with the identity provider (e.g., GitHub).

Implementation

Frontend

You need a login or registration page with a button/link that sends the browser to a backend route which starts the OAuth2 flow. For social login, it doesn’t matter whether you call it “login” or “registration”, as the first login typically creates the user.

Implementation details:

  • In the showcase project it's implemented in partials: the Login form and the Registration form.
  • Both partials use the same button, which handles clicks with router.post(route("github.register")).
  • route() is an Inertia.js helper function which simply resolves handler names to paths.
  • A plain link and HTTP GET would also work here, since no data is sent at this stage.

Login/Register handler (backend)

Create a route that redirects to the provider’s authorization URL and includes a callback pointing to your completion handler. For production you should generate OAuth2 state and PKCE code verifier, and store them in the session.
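
Generating those values needs only the standard library. A framework-agnostic sketch (the function name and the decision to return a tuple are mine):

```python
import base64
import hashlib
import secrets

def make_oauth_params():
    """Generate per-login secrets to store in the session before redirecting."""
    state = secrets.token_urlsafe(32)          # CSRF protection for the callback
    code_verifier = secrets.token_urlsafe(64)  # PKCE verifier (RFC 7636: 43-128 chars)
    # code_challenge = BASE64URL(SHA256(verifier)), with '=' padding stripped
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return state, code_verifier, code_challenge
```

state and code_verifier go into the session; code_challenge (together with code_challenge_method=S256) goes into the provider's authorization URL.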

Implementation details: – See the authorization controller – You can use httpx_oauth to get auth-related URLs for popular OAuth2 providers.

Authorization callback handler (backend)

Your callback (named “github.complete” in the showcase) should:

  1. Validate the stored OAuth2 state (required for security; missing in the showcase)
  2. Exchange the authorization code for an access token
  3. Create a user object in the local database and store the user ID in the session
  4. Redirect to a post-login page (e.g., dashboard)
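
Step 1 can be as small as a constant-time comparison of the state stored in the session with the state the provider echoes back. A sketch (how you read the session is framework-specific and left out):

```python
import hmac

def validate_state(session_state, returned_state):
    """Reject the callback unless the provider echoed our exact state value."""
    if not session_state or not returned_state:
        return False
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(session_state, returned_state)
```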

Optionally:

  • Fetch user details (e.g., email) from the provider
  • Provide user data to the frontend

Implementation details:

  • The showcase doesn't use OAuth2 state or PKCE.
  • GitHubOAuth2.get_access_token is used to exchange the auth code for a token (wrapped as a Litestar dependency, to facilitate reuse).
  • GitHubOAuth2.get_id_email is used to retrieve the user's email.
  • Inertia's share() is used to pass user details to the frontend.

© 2025 Matte Silver

 
Read more...

from Rolistologie

Amid the morass of the Macronists' scheming and the traps they lay for useful idiots, let's take refuge in the imaginary world of #INSMV (the #INSMVadlib version), with:

The influence of the Archangels and Demon Princes on the main groups of the French National Assembly in 2025.

On the first line, in bold, a rather active influence; on the second line, an involvement more like support. I haven't specified the level of investment or the reasons, so as to keep the secrets from the players, but if any GMs want to discuss it, I'm open. Of course, some actors are involved in several political groups, but for the sake of simplicity and clarity I've only placed each of them under their main group.

PCF : Haagenti, Kobal 

LFI : Alain, DajjâlGuy, Furfur, Hassan, Jordi
Catherine

Écolos : Novalis
Eli

PS : Malphas
Jean-Luc, Blandine

Modem : Christophe
Ange

LREM : JanusMammon, UphirValefor
Andromalius, Malthus, Shaytan, Vapula

LR : Marc
Caym

RN : Belial, Dominique, GeorgesLaurent, Nybbas
Joseph

#JdR

 
Read more...

from Jéssica Maryellen

Memory game - Numbers - Level 3

Theme: Numbers and Quantities – Maternal II

This memory game was specially developed for Maternal II students, with the goal of helping them recognize the numbers 1 through 9 and associate each number with its quantity.

The game consists of six pairs of cards, each pair made up of:

  • One card with the written numeral.
  • One card with the corresponding quantity shown in pictures.

In a playful, interactive way, the children are encouraged to observe, count, and match, developing skills such as:

  • Visual recognition of numbers.
  • A sense of quantity.
  • Attention and memory.
Level 1 -:-:-:– Level 2 -:-:-:– Level 3
 
Read more...

from Jéssica Maryellen

Memory game - Numbers - Level 2

Theme: Numbers and Quantities – Maternal II

This memory game was specially developed for Maternal II students, with the goal of helping them recognize the numbers 1 through 9 and associate each number with its quantity.

The game consists of five pairs of cards, each pair made up of:

  • One card with the written numeral.
  • One card with the corresponding quantity shown in pictures.

In a playful, interactive way, the children are encouraged to observe, count, and match, developing skills such as:

  • Visual recognition of numbers.
  • A sense of quantity.
  • Attention and memory.
Level 1 -:-:-:– Level 2 -:-:-:– Level 3
 
Read more...

from Jéssica Maryellen

Memory game - Numbers - Level 1

Theme: Numbers and Quantities – Maternal II

This memory game was specially developed for Maternal II students, with the goal of helping them recognize the numbers 1 through 9 and associate each number with its quantity.

The game consists of four pairs of cards, each pair made up of:

  • One card with the written numeral.
  • One card with the corresponding quantity shown in pictures.

In a playful, interactive way, the children are encouraged to observe, count, and match, developing skills such as:

  • Visual recognition of numbers.
  • A sense of quantity.
  • Attention and memory.
Level 1 -:-:-:– Level 2 -:-:-:– Level 3
 
Read more...