Compare commits


61 Commits

Author SHA1 Message Date
ZiWei
5dc81ec9be bump version to 0.10.3
update registry

do not modify axis globally

Prcix9320 (#207)

* 0.10.7 Update (#101)

* Clean up the registry to make it easier to understand (#76)

* delete deprecated mock devices

* rename categories

* combine chromatographic devices

* rename rviz simulation nodes

* organic virtual devices

* parse vessel_id

* run registry completion before merge

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

* fix: workstation handlers and vessel_id parsing

* fix: working directory error when a config path is given
feat: report the publish topic on error

* modify default discovery_interval to 15s

* feat: add trace log level

* feat: add ChinWe device control class with serial communication and motor control support (#79)

* fix: drop_tips not using auto resource select

* fix: discard_tips error

* fix: discard_tips

* fix: prcxi_res

* add: prcxi res
fix: slow startup

* feat: workstation example

* fix pumps and liquid_handler handle

* feat: improve protocol node runtime logging

* fix all protocol_compilers and remove deprecated devices

* feat: add use_remote_resource parameter

* fix and remove redundant info

* bugfixes on organic protocols

* fix filter protocol

* fix protocol node

* temporarily tolerate incorrect driver implementations

* fix: prcxi import error

* use call_async in all service to avoid deadlock

* fix: figure_resource

* Update recipe.yaml

* add workstation template and battery example

* feat: add sk & ak

* update workstation base

* Create workstation_architecture.md

* refactor: workstation_base now contains only business logic; communication and sub-device management are delegated to ProtocolNode

* refactor: ProtocolNode→WorkstationNode

* Add:msgs.action (#83)

* update: Workstation dev bumps version from 0.10.3 to 0.10.4 (#84)

* Add:msgs.action

* update: bump version from 0.10.3 to 0.10.4

* simplify resource system

* uncompleted refactor

* example of using WorkstationBase

* feat: websocket

* feat: websocket test

* feat: workstation example

* feat: action status

* fix: registration error for the station's own methods

* fix: restore protocol node handler methods

* fix: build

* fix: missing job_id key

* ws test version 1

* ws test version 2

* ws protocol

* add logging for material relationship uploads

* add logging for material relationship uploads

* correct material relationship uploads

* fix broken tracker instance tracking in workstations

* add handle detection; upload material edge relationships

* fix event loop error

* fix edge reporting error

* fix async error

* update the schema title field

* support auto-refresh of host node info

* registry editor

* fix message corruption when status is sent at high frequency

* add addr parameter

* fix: addr param

* fix: addr param

* drop labid and the mandatory config input

* Add action definitions for LiquidHandlerSetGroup and LiquidHandlerTransferGroup

- Created LiquidHandlerSetGroup.action with fields for group name, wells, and volumes.
- Created LiquidHandlerTransferGroup.action with fields for source and target group names and unit volume.
- Both actions include response fields for return information and success status.
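The two action interfaces described above can be mirrored as plain Python dataclasses. This is only an illustrative sketch: the field names below are assumptions inferred from the commit message, not the actual `.action` definitions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical mirror of the goal/result fields described above;
# field names are assumptions, not the real .action contents.

@dataclass
class LiquidHandlerSetGroupGoal:
    group_name: str = ""
    wells: List[str] = field(default_factory=list)      # e.g. ["A1", "A2"]
    volumes: List[float] = field(default_factory=list)  # one volume per well

@dataclass
class LiquidHandlerTransferGroupGoal:
    source_group_name: str = ""
    target_group_name: str = ""
    unit_volume: float = 0.0  # volume transferred per well pair

@dataclass
class LiquidHandlerGroupResult:
    return_info: str = ""
    success: bool = False
```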

* Add LiquidHandlerSetGroup and LiquidHandlerTransferGroup actions to CMakeLists

* Add set_group and transfer_group methods to PRCXI9300Handler and update liquid_handler.yaml

* change result_info to a dict type

* add UAT address substitution

* runze multiple pump support

(cherry picked from commit 49354fcf39)

* remove runze multiple software obtainer

(cherry picked from commit 8bcc92a394)

* support multiple backbone

(cherry picked from commit 4771ff2347)

* Update runze pump format

* Correct runze multiple backbone

* Update runze_multiple_backbone

* Correct runze pump multiple receive method.

* Correct runze pump multiple receive method.

* support one-to-many and many-to-many transfer_group for PRCXI9320

* remove MQTT, update the launch docs, provide a sample registry file, bump to 0.10.5

* fix import error

* fix dupe upload registry

* refactor ws client

* add server timeout

* Fix: run-column with correct vessel id (#86)

* fix run_column

* Update run_column_protocol.py

(cherry picked from commit e5aa4d940a)

* resource_update use resource_add

* add deck slot recommendation feature

* redefine the input parameters for slot recommendation

* update registry with nested obj

* fix protocol node log_message, added create_resource return value

* fix protocol node log_message, added create_resource return value

* try fix add protocol

* fix resource_add

* fix the liquid handling station's incorrect aspirate registry

* Feature/xprbalance-zhida (#80)

* feat(devices): add Zhida GC/MS pretreatment automation workstation

* feat(devices): add mettler_toledo xpr balance

* balance

* re-complete the zhida registry

* PRCXI9320 json

* PRCXI9320 json

* PRCXI9320 json

* fix resource download

* remove class for resource

* bump version to 0.10.6

* update all registries

* fix protocolnode compatibility

* fix protocolnode compatibility

* Update install md

* Add Defaultlayout

* update the material interfaces

* fix dict to tree/nested-dict converter

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* frontend_docs

* create/update resources with POST/PUT for large/small amounts of data

* create/update resources with POST/PUT for large/small amounts of data

* refactor: add itemized_carrier to replace a carrier composed of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

* Workstation templates: Resources and its CRUD, and workstation tasks (#95)

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* create/update resources with POST/PUT for large/small amounts of data

* refactor: add itemized_carrier to replace a carrier composed of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

---------

Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>

* update the material interfaces

* Workstation dev yb2 (#100)

* Refactor and extend reaction station action messages

* Refactor dispensing station tasks to enhance parameter clarity and add batch processing capabilities

- Updated `create_90_10_vial_feeding_task` to include detailed parameters for 90%/10% vial feeding, improving clarity and usability.
- Introduced `create_batch_90_10_vial_feeding_task` for batch processing of 90%/10% vial feeding tasks with JSON formatted input.
- Added `create_batch_diamine_solution_task` for batch preparation of diamine solution, also utilizing JSON formatted input.
- Refined `create_diamine_solution_task` to include additional parameters for better task configuration.
- Enhanced schema descriptions and default values for improved user guidance.
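The batch task creation described above takes JSON-formatted input. The following is a minimal sketch of that idea; the key names (`vials`, `vial_id`, `solid_mass_g`) and the helper name are assumptions for illustration, not the actual schema.

```python
import json

# Hypothetical sketch of a JSON-driven batch 90%/10% vial feeding task
# builder; key names are assumptions, not the real input schema.

def create_batch_90_10_vial_feeding_task(batch_json: str) -> list:
    """Parse a JSON batch spec and expand it into one task dict per vial."""
    batch = json.loads(batch_json)
    tasks = []
    for item in batch["vials"]:
        mass = item["solid_mass_g"]
        tasks.append({
            "vial_id": item["vial_id"],
            "main_portion_g": round(mass * 0.9, 6),   # 90% portion
            "minor_portion_g": round(mass * 0.1, 6),  # 10% portion
        })
    return tasks

spec = json.dumps({"vials": [{"vial_id": "V1", "solid_mass_g": 2.0}]})
print(create_batch_90_10_vial_feeding_task(spec))
# [{'vial_id': 'V1', 'main_portion_g': 1.8, 'minor_portion_g': 0.2}]
```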

* fix to_plr_resources

* add update remove

* support auto-generation of the selector registry
support transferring materials

* fix resource addition

* fix transfer_resource_to_another generation

* update transfer_resource_to_another parameters to support a spot argument

* add test_resource action

* fix host_node error

* fix host_node test_resource error

* fix host_node test_resource error

* filter local actions

* move internal actions for host node compatibility

* fix bug where sync task errors were not displayed

* feat: allow returning materials not owned by this node; they can be distinguished later via decoration, so no warning is emitted

* update todo

* modify bioyond/plr converter, bioyond resource registry, and tests

* pass the tests

* update todo

* add conda-pack-build.yml

* add auto install script for conda-pack-build.yml

(cherry picked from commit 172599adcf)

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* Add version in __init__.py
Update conda-pack-build.yml
Add create_zip_archive.py

* Update conda-pack-build.yml

* Update conda-pack-build.yml (with mamba)

* Update conda-pack-build.yml

* Fix FileNotFoundError

* Try fix 'charmap' codec can't encode characters in position 16-23: character maps to <undefined>

* Fix unilabos msgs search error

* Fix environment_check.py

* Update recipe.yaml

* Update registry. Update uuid loop figure method. Update install docs.

* Fix nested conda pack

* Fix one-key installation path error

* Bump version to 0.10.7

* Workshop bj (#99)

* Add LaiYu Liquid device integration and tests

Introduce LaiYu Liquid device implementation, including backend, controllers, drivers, configuration, and resource files. Add hardware connection, tip pickup, and simplified test scripts, as well as experiment and registry configuration for LaiYu Liquid. Documentation and .gitignore for the device are also included.

* feat(LaiYu_Liquid): restructure the device module and add hardware docs

refactor: reorganize the LaiYu_Liquid module directory structure
docs: add SOPA pipette and stepper motor control command docs
fix: correct the default max volume in the device config
test: add workbench configuration test cases
chore: remove outdated test scripts and config files

* add

* refactor: rename LaiYu_Liquid.py to laiyu_liquid_main.py and update all import references

- Used git mv to rename LaiYu_Liquid.py to laiyu_liquid_main.py
- Updated import references in all affected files
- Behavior unchanged; only naming consistency improved
- Verified that all imports still work

* fix: export LaiYuLiquidBackend from core/__init__.py

- Added LaiYuLiquidBackend to the import list
- Added LaiYuLiquidBackend to the __all__ export list
- Ensured all major classes can be imported correctly

* fix folder name casing

* battery assembly workstation secondary-development tutorial (with table of contents), uploaded to dev (#94)

* battery assembly workstation secondary-development tutorial

* Update intro.md

* materials tutorial

* update the materials tutorial; JSON-format annotations

* Update prcxi driver & fix transfer_liquid mix_times (#90)

* Update prcxi driver & fix transfer_liquid mix_times

* fix: correct mix_times type

* Update liquid_handler registry

* test: prcxi.py

* Update registry from pr

* fix one-key script not existing

* clean files

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>

* fix startup env check.
add auto install during one-key installation

* Try fix one-key build on linux

* Complete all one key installation

* fix: rename schema field to resource_schema with serialization and validation aliases (#104)

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* Fix one-key installation build

Install conda-pack before pack command

Add conda-pack to base when building one-key installer

Fix param error when using mamba run

Try fix one-key build on linux

* Fix conda pack on windows

* add plr_to_bioyond, and refactor bioyond stations

* modify default config

* Fix one-key installation build for windows

* Fix workstation startup
Update registry

* Fix/resource UUID and doc fix (#109)

* Fix ResourceTreeSet load error

* Raise error when using unsupported type to create ResourceTreeSet

* Fix children key error

* Fix children key error

* Fix workstation resource not tracking

* Fix workstation deck & children resource dupe

* Fix workstation deck & children resource dupe

* Fix multiple resource error

* Fix resource tree update

* Fix resource tree update

* Force confirm uuid

* Log more error details

* Refactor Bioyond workstation and experiment workflow (#105)

Refactored the Bioyond workstation classes to improve parameter handling and workflow management. Updated experiment.py to use BioyondReactionStation with deck and material mappings, and enhanced workflow step parameter mapping and execution logic. Adjusted JSON experiment configs, improved workflow sequence handling, and added UUID assignment to PLR materials. Removed unused station_config and material cache logic, and added detailed docstrings and debug output for workflow methods.

* Fix resource get.
Fix resource parent not found.
Mapping uuid for all resources.

* mount parent uuid

* Add logging configuration based on BasicConfig in main function

* fix workstation node error

* fix workstation node error

* Update boot example

* temp fix for resource get

* temp fix for resource get

* provide error info when the plr type can't be found

* pack repo info

* fix to plr type error

* fix to plr type error

* Update regular container method

* support no size init

* fix comprehensive_station.json

* fix comprehensive_station.json

* fix type conversion

* fix state loading for regular container

* Update deploy-docs.yml

* Update deploy-docs.yml

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* Close #107
Update doc url.

* Fix/update resource (#112)

* cancel upload_registry

* Refactor Bioyond workstation and experiment workflow -fix (#111)

* refactor(bioyond_studio): improve material cache loading and parameter validation

Improve material cache loading to support multiple material types and detailed material handling
Rename workflow parameter validation fields from key/value to Key/DisplayValue
Remove the unused merge_workflow_with_parameters method
Add a get_station_info method to fetch basic workstation info
Clean up commented-out code in the experiment files and update import paths

* fix: check for a missing parent resource when removing resources

In BaseROS2DeviceNode, check whether the parent resource is None before removing a resource, avoiding a null-pointer error
Also update the Bottle and BottleCarrier classes to accept **kwargs
Fix the casing of Liquid_feeding_beaker in the test files

* correct return message

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* fix resource_get in action

* fix(reaction_station): clear workflow sequences and parameters to avoid duplicate execution (#113)

Clear the workflow sequence and parameters after creating a task, preventing accumulation on the next run

* Update create_resource device_id

* Update ResourceTracker

add more enumeration in POSE

fix converter in resource_tracker

* Update graphio together with workstation design.

fix(reaction_station): 为步骤参数添加Value字段传个BY后端

fix(bioyond/warehouses): 修正仓库尺寸和物品排列参数

调整仓库的x轴和z轴物品数量以及物品尺寸参数,使其符合4x1x4的规格要求

fix warehouse serialize/deserialize

fix bioyond converter

fix itemized_carrier.unassign_child_resource

allow not-loaded MSG in registry

add layout serializer & converter

warehouse uses A1-D4; add warehouse layout

fix(graphio): correct the coordinate calculation in the bioyond-to-plr resource conversion

Fix resource assignment and type mapping issues

Corrects resource assignment in ItemizedCarrier by using the correct spot key from _ordering. Updates graphio to use 'typeName' instead of 'name' for type mapping in resource_bioyond_to_plr. Renames DummyWorkstation to BioyondWorkstation in workstation_http_service for clarity.
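The fix above resolves a spot key through `_ordering` before assignment. A minimal sketch of that identifier-to-index lookup, assuming an ordering list of spot names in the style of PyLabRobot itemized resources (the helper itself is hypothetical):

```python
# Hypothetical helper: resolve a spot identifier like "A1" to its
# positional index via an ordering list, as the fix above describes.

def identifier_to_index(ordering, identifier):
    """Map a spot identifier (e.g. 'B1') to its site index."""
    try:
        return ordering.index(identifier)
    except ValueError:
        raise KeyError(f"unknown spot identifier: {identifier!r}")

print(identifier_to_index(["A1", "B1", "C1", "D1"], "C1"))  # 2
```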

* Update workstation & bioyond example

Refine descriptions in Bioyond reaction station YAML

Updated and clarified field and operation descriptions in the reaction_station_bioyond.yaml file for improved accuracy and consistency. Changes include more precise terminology, clearer parameter explanations, and standardized formatting for operation schemas.

refactor(workstation): update reaction station parameter descriptions and add the dispensing station config file

Correct the reaction station method parameter descriptions for accuracy and clarity
Add the bioyond_dispensing_station.yaml config file

add create_workflow script and test

add invisible_slots to carriers

fix(warehouses): correct the dimensions of the bioyond_warehouse_1x4x4 warehouse

Adjust the warehouse's num_items_x and num_items_z values to match the actual layout, and update the item dimension parameters

save resource get data. allow empty value for layout and cross_section_type

More decks&plates support for bioyond (#115)

refactor(registry): restructure the reaction station device config, simplifying and updating operation commands

Remove the old automatic operation commands and add command configs for specific chemistry operations
Update module paths and config structure; refine parameter definitions and descriptions

fix(dispensing_station): correct the material info query method call

Call material_id_query through hardware_interface instead of directly, conforming to the interface design

* PRCXI Update

modify prcxi wiring

prcxi example diagram

Create example_prcxi.json

* Update resource extra & uuid.

use ordering to convert identifier to idx

convert identifier to site idx

correct extra key

update extra before transfer

fix multiple instance error

add resource_tree_transfer func

fix itemized carrier assign child resource

support internal device material transfer

remove extra key

use same callback group

support material extra

support material extra
support update_resource_site in extra

* Update workstation.

modify workstation_architecture docs

bioyond_HR (#133)

* feat: Enhance Bioyond synchronization and resource management

- Implemented synchronization for all material types (consumables, samples, reagents) from Bioyond, logging detailed information for each type.
- Improved error handling and logging during synchronization processes.
- Added functionality to save Bioyond material IDs in UniLab resources for future updates.
- Enhanced the `sync_to_external` method to handle material movements correctly, including querying and creating materials in Bioyond.
- Updated warehouse configurations to support new storage types and improved layout for better resource management.
- Introduced new resource types such as reactors and tip boxes, with detailed specifications.
- Modified warehouse factory to support column offsets for naming conventions (e.g., A05-D08).
- Improved resource tracking by merging extra attributes instead of overwriting them.
- Added a new method for updating resources in Bioyond, ensuring better synchronization of resource changes.
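The column-offset naming mentioned above (e.g. A05-D08) can be sketched as a small slot-name generator. This is an illustrative assumption of the convention, not the actual warehouse factory code; the function name is hypothetical.

```python
import string

# Minimal sketch of warehouse slot naming with a column offset, assuming
# rows are lettered A.. and columns are zero-padded numbers, so that
# col_offset=4 on a 4x4 warehouse yields A05..D08.

def warehouse_slot_names(num_rows: int, num_cols: int, col_offset: int = 0):
    """Generate row-major slot names like A05, A06, ..., D08."""
    names = []
    for r in range(num_rows):
        row_letter = string.ascii_uppercase[r]
        for c in range(num_cols):
            names.append(f"{row_letter}{c + 1 + col_offset:02d}")
    return names

print(warehouse_slot_names(4, 4, col_offset=4))  # ['A05', ..., 'D08']
```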

* feat: add TipBox and Reactor configurations to bottles.yaml

* fix: correct the volume parameter handling in the liquid feeding method

Fix the volume parameter handling in solid_feeding_vials and refine the conditions for using the solvents parameter

Update the liquid feeding method to support computing volume automatically from solvent info; add the solvents parameter and update the documentation

Add batch creation methods for vial and solution tasks

Add batch creation of 90%/10% vial feeding tasks and diamine solution preparation tasks; update related parameters and defaults

* interfaces for the film sealer, film peeler, and consumables station

* add Raman and XRD related code

* Resource update & asyncio fix

correct bioyond config

prcxi example

fix append_resource

fix regularcontainer

fix cancel error

fix resource_get param

fix json dumps

support name change during materials change

enable slave mode

change uuid logger to trace level

correct remove_resource stats

disable slave connect websocket

adjust with_children param

modify devices to use correct executor (sleep, create_task)

support sleep and create_task in node

fix run async execution error

* bump version to 0.10.9

update registry

* PRCXI Reset Error Correction (#166)

* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Rewrite setup logic to clear error code

* initialize the step_mode attribute

* 1114 material manual definition tutorial by xinyu (#165)

* Yibin Bioyond workstation deck frontend by Xinyu

* material construction tutorial by xinyu

* 1114 material manual definition tutorial

* 3d sim (#97)

* switch lh startup to JSON

* switch lh startup to JSON

* rework the backend into a generic sim backend

* update the yaml paths; adapt the 3D models for the web production environment

* add laiyu hardware connection

* revise the pipette state-check method

Revise the pipette state-check method,
add conversion between the three-axis calibration point and the zero point,
add a backend for real three-axis movement

* revise the laiyu liquid handling station

Simplify the movement methods,
remove the software position limits,
fix the issue where the Z axis had to be re-homed even when only the Z axis was in use

* update lh and the laiyu workshop

1. Other liquid handling stations can now be supported just by swapping the backend; the main class remains LiquidHandler and needs no rewriting.

2. Tip detection now relies on the tip itself rather than the class.

3. Homing parameters are now expressed in millimeters for easier manual adjustment.

4. Homing is reworked: power-on uses mechanical homing to establish the mechanical zero, while manual homing sets the work-area zero for easier calculation; the two do not interfere.

* revise tip actions

* revise the virtual simulation method

---------

Co-authored-by: zhangshixiang <@zhangshixiang>
Co-authored-by: Junhan Chang <changjh@dp.tech>

* standardize OPC UA device integration into unilab (#78)

* initial commit, keeping only the current working-tree state

* remove redundant arm_slider meshes

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>

* add new laiyu liquid driver, yaml and json files (#164)

* HR material sync; fix frontend display positions (#135)

* Update the Bioyond workstation config, add new material type mappings and carrier definitions, improve material query logic

* Add a Bioyond experiment config file defining material type mappings and device configuration

* Update the bioyond_warehouse_reagent_stack method, correcting the reagent stack dimensions and layout description

* Update the Bioyond experiment config, correcting material type mappings and refining device configuration

* Update the Bioyond resource sync logic, streamline material check-in, improve error handling and logging

* Update Bioyond resources, add dedicated carriers for the dispensing and reaction stations, improve the warehouse factory's ordering

* Update Bioyond resources, add carriers for the dispensing and reaction stations, refine reagent and sample bottle configs

* Update the Bioyond experiment config, correcting the reagent bottle carrier ID to match the device

* Update Bioyond resources, remove the reaction station single-beaker carrier, add a reaction station single-flask carrier category

* Refactor Bioyond resource synchronization and update bottle carrier definitions

- Removed traceback printing in error handling for Bioyond synchronization.
- Enhanced logging for existing Bioyond material ID usage during synchronization.
- Added new bottle carrier definitions for single flask and updated existing ones.
- Refactored dispensing station and reaction station bottle definitions for clarity and consistency.
- Improved resource mapping and error handling in graphio for Bioyond resource conversion.
- Introduced layout parameter in warehouse factory for better warehouse configuration.

* Update the Bioyond warehouse factory, add ordering support, improve coordinate calculation

* Update Bioyond carrier and deck configs, adjust sample plate dimensions and warehouse coordinates

* Update Bioyond resource sync, enrich occupied-slot logging, fix coordinate conversion

* Update Bioyond reaction and dispensing station configs, adjust material type mappings and IDs, remove unneeded items

* support name change during materials change

* fix json dumps

* correct tip

* Refine the scheduler API paths and update related method descriptions

* Update the BIOYOND carrier docs, adjust the API to support carrier types with built-in reagent bottles, fix child-material handling when fetching resources

* Sync on resource deletion, improve the check-out flow

* Fix visibility logic in ItemizedCarrier

* Save the original Bioyond info into unilabos_extra so it can be queried at check-out

* Use resource.capacity to decide whether something is a reagent bottle (carrier) or a multi-bottle carrier, taking the corresponding Bioyond conversion path

* Fix bioyond bottle_carriers ordering

* Improve the Bioyond material sync logic, with better coordinate parsing and position-update handling

* disable slave connect websocket

* correct remove_resource stats

* change uuid logger to trace level

* enable slave mode

* refactor(bioyond): unify resource naming and improve material sync

- Rename the DispensingStation and ReactionStation resources to a unified PolymerStation naming
- Improve material sync to support querying consumable types (typeMode=0)
- Add default material parameter configuration
- Adjust the warehouse coordinate layout
- Remove deprecated resource definitions

* feat(warehouses): add col_offset and layout parameters to the warehouse functions

* refactor: rename material type mappings in the experiment config

Rename the DispensingStation and ReactionStation material type mappings to PolymerStation for naming consistency

* fix: rename the carrier in the experiment config from 6VialCarrier to 6StockCarrier

* feat(bioyond): separate material creation from check-in

Split the material sync flow into two independent phases: the transfer phase only creates materials, and the add phase performs check-in
Simplify the status-check interface to return only the connection state

* fix(reaction_station): correct the liquid feeding beaker's volume unit and enrich the return value

Change the liquid feeding beaker's volume unit from μL to g to match actual usage
Add merged_workflow and order_params fields to the return value for more complete workflow information

* feat(dispensing_station): include order_params in the task creation result

Add an order_params field to the create_order return value so callers can get the full task parameters

* fix(dispensing_station): use the 90% material directly instead of splitting it into 3 parts

The old logic split the main weighed solid evenly into 3 parts as the 90% material; it now uses main_portion directly

* feat(bioyond): output task codes and task IDs, enabling status monitoring after batch task creation

* refactor(registry): simplify task result handling in the device config

Merge the separate task code and task ID fields into a unified return_info field
Update the descriptions to reflect the new data structure

* feat(workstation): add an HTTP reporting service and task completion tracking

- Add the API-required fields in graphio.py
- Implement start/stop logic for the workstation HTTP service
- Add a task-completion tracking dict and wait methods
- Override the task-completion report handler to record status
- Support waiting on batch task completion and fetching reports

* refactor(dispensing_station): remove wait_for_order_completion_and_get_report

Replaced by wait_for_multiple_orders_and_get_reports, simplifying the code

* fix: update the task report API error

* fix(workstation_http_service): fix device_id retrieval in status queries

Safely retrieve device_id during status queries, avoiding exceptions when the attribute is missing

* fix(bioyond_studio): improve error handling and logging when material check-in fails

Print more detailed error info when the material check-in API call fails
Also fix the empty-response and failure checks in station.py

* refactor(bioyond): improve the bottle carrier assignment logic and comments

Refactor the bottle carrier assignment using nested loops instead of hard-coded index assignment
Add more detailed coordinate-mapping comments clarifying the PLR-to-Bioyond coordinate correspondence

* fix(bioyond_rpc): fix empty return when material check-in succeeds without a data field

When the API returns success but no data field, return a dict containing a success flag instead of an empty dict

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>
Co-authored-by: Junhan Chang <changjh@dp.tech>

* nmr

* Update devices

* bump version to 0.10.10

* Update repo files.

* Add get_resource_with_dir & get_resource method

* fix camera & workstation & warehouse & reaction station driver

* update docs, test examples
fix liquid_handler init bug

* bump version to 0.10.11

* Add startup_json_path, disable_browser, port config

* Update oss config

* feat(bioyond_studio): 添加项目API接口支持及优化物料管理功能

添加通用项目API接口方法(_post_project_api, _delete_project_api)用于与LIMS系统交互
实现compute_experiment_design方法用于实验设计计算
新增brief_step_parameters等订单相关接口方法
优化物料转移逻辑,增加异步任务处理
扩展BioyondV1RPC类,添加批量物料操作、订单状态管理等功能

* feat(bioyond): add a measurement vial warehouse and update warehouse factory parameters

* Support unilabos_samples key

* add session_id and normal_exit

* Add result schema and add TypedDict conversion.

* Fix port error

* Add backend api and update doc

* Add get_regular_container func

* Add get_regular_container func

* Transfer_liquid (#176)

* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Typo in base_device_node.py

* Enhance liquid handling functionality by adding support for multiple transfer modes (one-to-many, one-to-one, many-to-one) and improving parameter validation. Default channel usage is set when not specified. Adjusted mixing logic to ensure it only occurs when valid conditions are met. Updated documentation for clarity.
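The three transfer modes described above can be normalized into explicit (source, target) pairs. This is an illustrative sketch of the idea, not the actual implementation; the function name is an assumption.

```python
# Illustrative sketch: normalize one-to-one, one-to-many, and
# many-to-one transfers into explicit (source, target) pairs.

def expand_transfer_pairs(sources, targets):
    """Expand a transfer spec into per-well (source, target) pairs."""
    if len(sources) == len(targets):          # one-to-one
        return list(zip(sources, targets))
    if len(sources) == 1:                     # one-to-many
        return [(sources[0], t) for t in targets]
    if len(targets) == 1:                     # many-to-one
        return [(s, targets[0]) for s in sources]
    raise ValueError("source/target lengths are incompatible")

print(expand_transfer_pairs(["A1"], ["B1", "B2"]))
# [('A1', 'B1'), ('A1', 'B2')]
```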

* Auto dump logs, fix workstation input schema

* Fix startup with remote resource error

Resource dict fully change to "pose" key

Update oss link

Reduce pylabrobot conversion warning & force enable log dump.

Update the logo image

* signal when host node is ready

* fix ros2 future

print all logs to file
fix resource dict dump error

* update version to 0.10.12

* change the sample_uuid return value

* revise the pose tag assignment mechanism

* add a return value to the aspirate function

* return the sample_uuid after dispense

* add a reset method for self.pending_liquids_dict

* modify the prcxi JSON file to fix the trash error

* modify the prcxi JSON to prevent the PlateT4 hardware error

* partial changes to the laiyu liquid handling station, removing repeated initialization

* update the visualization to match the new material format

* add a tip-switching method; add mock shaking and heating methods

* add the gripper

* remove the redundant laiyu parts

* gripper can now be launched from the cloud

* Delete __init__.py

* Enhance PRCXI9300 classes with new Container and TipRack implementations, improving state management and initialization logic. Update JSON configuration to reflect type changes for containers and plates.

* modify the upload data

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>
Co-authored-by: hh.(SII) <103566763+Mile-Away@users.noreply.github.com>
Co-authored-by: Xianwei Qi <qxw@stu.pku.edu.cn>
Co-authored-by: WenzheG <wenzheguo32@gmail.com>
Co-authored-by: Harry Liu <113173203+ALITTLELZ@users.noreply.github.com>
Co-authored-by: q434343 <73513873+q434343@users.noreply.github.com>
Co-authored-by: tt <166512503+tt11142023@users.noreply.github.com>
Co-authored-by: xyc <49015816+xiaoyu10031@users.noreply.github.com>
Co-authored-by: zhangshixiang <@zhangshixiang>
Co-authored-by: zhangshixiang <554662886@qq.com>
Co-authored-by: ALITTLELZ <l_LZlz@163.com>

Add topic config

add camera driver (#191)

* add camera driver

* add init.py file to cameraSII driver

Enhanced Neware Battery Test System OSS Upload (#196)

* feat: neware-oss-upload-enhancement

* feat(neware): enhance OSS upload with metadata and workflow handles

Add post process station and related resources (#195)

* Add post process station and related resources

- Created JSON configuration for post_process_station and its child post_process_deck.
- Added YAML definitions for post_process_station, bottle carriers, bottles, and deck resources.
- Implemented Python classes for bottle carriers, bottles, decks, and warehouses to manage resources in the post process.
- Established a factory method for creating warehouses with customizable dimensions and layouts.
- Defined the structure and behavior of the post_process_deck and its associated warehouses.

* feat(post_process): add post_process_station and related warehouse functionality

- Introduced post_process_station.json to define the post-processing station structure.
- Implemented post_process_warehouse.py to create warehouse configurations with customizable layouts.
- Added warehouses.py for specific warehouse configurations (4x3x1).
- Updated post_process_station.yaml to reflect new module paths for OpcUaClient.
- Refactored bottle carriers and bottles YAML files to point to the new module paths.
- Adjusted deck.yaml to align with the new organizational structure for post_process_deck.

prcxi resource (#202)

* prcxi resource

* prcxi_resource

* Fix upload error not showing.
Support str type category.

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

Fix upload error not showing.
Support str type category.

feat: introduce `wait_time` command and configurable device communication timeout.

feat: Add `SyringePump` (SY-03B) driver with unified serial/TCP transport for `chinwe` device, including registry and test configurations.
2025-12-26 03:36:48 +08:00
Xuwznln
13a6795657 Update organic syn station. 2025-12-15 02:34:36 +08:00
Xianwei Qi
53219d8b04 Update docs
update "laiyu" missing init file.

fix "laiyu" missing init file.

fix "🐛 fix"

🐛 fix: config file is overwritten by default args even when they are not set.

mix

fix mix; resolve the simulation workflow error
2025-12-14 13:13:21 +08:00
Xuwznln
b1cdef9185 update version to 0.10.12 2025-12-04 18:47:16 +08:00
Xuwznln
9854ed8c9c fix ros2 future
print all logs to file
fix resource dict dump error
2025-12-04 18:46:37 +08:00
Xuwznln
52544a2c69 signal when host node is ready 2025-12-02 12:00:26 +08:00
ZiWei
5ce433e235 Fix startup with remote resource error
Resource dict fully change to "pose" key

Update oss link

Reduce pylabrobot conversion warning & force enable log dump.

Update the logo image
2025-12-02 11:51:01 +08:00
Xuwznln
c7c14d2332 Auto dump logs, fix workstation input schema 2025-11-27 14:24:40 +08:00
Harry Liu
6fdd482649 Transfer_liquid (#176)
* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Typo in base_device_node.py

* Enhance liquid handling functionality by adding support for multiple transfer modes (one-to-many, one-to-one, many-to-one) and improving parameter validation. Default channel usage is set when not specified. Adjusted mixing logic to ensure it only occurs when valid conditions are met. Updated documentation for clarity.
2025-11-27 13:49:04 +08:00
Xuwznln
d390236318 Add get_regular_container func 2025-11-27 13:47:12 +08:00
Xuwznln
ed8ee29732 Add get_regular_container func 2025-11-27 13:46:40 +08:00
Xuwznln
ffc583e9d5 Add backend api and update doc 2025-11-26 19:17:46 +08:00
Xuwznln
f1ad0c9c96 Fix port error 2025-11-25 15:19:15 +08:00
Xuwznln
8fa3407649 Add result schema and add TypedDict conversion. 2025-11-25 15:16:27 +08:00
Xuwznln
d3282822fc add session_id and normal_exit 2025-11-20 22:43:24 +08:00
Xuwznln
554bcade24 Support unilabos_samples key 2025-11-19 15:53:59 +08:00
ZiWei
a662c75de1 feat(bioyond): add a measurement vial warehouse and update warehouse factory parameters 2025-11-19 14:26:12 +08:00
ZiWei
931614fe64 feat(bioyond_studio): add project API support and improve material management
Add generic project API methods (_post_project_api, _delete_project_api) for interacting with the LIMS system
Implement compute_experiment_design for experiment design calculations
Add order-related methods such as brief_step_parameters
Improve the material transfer logic with async task handling
Extend BioyondV1RPC with batch material operations and order status management
2025-11-19 14:26:10 +08:00
Xuwznln
d39662f65f Update oss config 2025-11-19 14:22:03 +08:00
Xuwznln
acf5fdebf8 Add startup_json_path, disable_browser, port config 2025-11-18 18:59:39 +08:00
Xuwznln
7f7b1c13c0 bump version to 0.10.11 2025-11-18 18:47:26 +08:00
Xuwznln
75f09034ff update docs, test examples
fix liquid_handler init bug
2025-11-18 18:42:27 +08:00
ZiWei
549a50220b fix camera & workstation & warehouse & reaction station driver 2025-11-18 18:41:37 +08:00
Xuwznln
4189a2cfbe Add get_resource_with_dir & get_resource method 2025-11-15 22:50:30 +08:00
Xuwznln
48895a9bb1 Update repo files. 2025-11-15 03:15:44 +08:00
Xuwznln
891f126ed6 bump version to 0.10.10 2025-11-15 03:11:37 +08:00
Xuwznln
4d3475a849 Update devices 2025-11-15 03:11:36 +08:00
WenzheG
b475db66df nmr 2025-11-15 03:11:35 +08:00
ZiWei
a625a86e3e HR material sync; fix frontend display positions (#135)
* Update the Bioyond workstation config, add new material type mappings and carrier definitions, improve material query logic

* Add a Bioyond experiment config file defining material type mappings and device configuration

* Update the bioyond_warehouse_reagent_stack method, correcting the reagent stack dimensions and layout description

* Update the Bioyond experiment config, correcting material type mappings and refining device configuration

* Update the Bioyond resource sync logic, streamline material check-in, improve error handling and logging

* Update Bioyond resources, add dedicated carriers for the dispensing and reaction stations, improve the warehouse factory's ordering

* Update Bioyond resources, add carriers for the dispensing and reaction stations, refine reagent and sample bottle configs

* Update the Bioyond experiment config, correcting the reagent bottle carrier ID to match the device

* Update Bioyond resources, remove the reaction station single-beaker carrier, add a reaction station single-flask carrier category

* Refactor Bioyond resource synchronization and update bottle carrier definitions

- Removed traceback printing in error handling for Bioyond synchronization.
- Enhanced logging for existing Bioyond material ID usage during synchronization.
- Added new bottle carrier definitions for single flask and updated existing ones.
- Refactored dispensing station and reaction station bottle definitions for clarity and consistency.
- Improved resource mapping and error handling in graphio for Bioyond resource conversion.
- Introduced layout parameter in warehouse factory for better warehouse configuration.

* Update Bioyond warehouse factory: add ordering support, improve coordinate calculation logic

* Update Bioyond carrier and deck config: adjust sample plate dimensions and warehouse coordinates

* Update Bioyond resource sync: enrich occupied-position log messages, correct coordinate conversion logic

* Update Bioyond reaction and dispensing station config: adjust material type mappings and IDs, remove unnecessary items

* support name change during materials change

* fix json dumps

* correct tip

* Improve scheduler API paths, update related method descriptions

* Update BIOYOND carrier docs; adjust the API to support carrier types with built-in reagent bottles; fix child-material handling when fetching resources

* Implement synchronized handling on resource deletion, improve check-out logic

* Fix visibility logic in ItemizedCarrier

* Save the original Bioyond info into unilabos_extra for lookup at check-out

* Use resource.capacity to decide between a reagent bottle (carrier) and a multi-bottle carrier, applying the appropriate Bioyond conversion

* Fix bioyond bottle_carriers ordering

* Improve Bioyond material sync logic: enhance coordinate parsing and position update handling

* disable slave connect websocket

* correct remove_resource stats

* change uuid logger to trace level

* enable slave mode

* refactor(bioyond): unify resource naming and improve material sync logic

- Unify DispensingStation and ReactionStation resources under the PolymerStation name
- Improve material sync logic, supporting queries for consumable types (typeMode=0)
- Add default material parameter configuration
- Adjust warehouse coordinate layout
- Clean up deprecated resource definitions

* feat(warehouses): add col_offset and layout parameters to warehouse functions

* refactor: rename material type mappings in experiment configs

Unify the DispensingStation and ReactionStation material type mappings under PolymerStation for naming consistency

* fix: update carrier name in experiment config from 6VialCarrier to 6StockCarrier

* feat(bioyond): separate material creation from check-in

Split the material sync flow into two independent phases: the transfer phase only creates materials, the add phase performs check-in
Simplify the status-check interface to return connection status only

* fix(reaction_station): correct the liquid feeding beaker volume unit and enrich the return result

Change the liquid feeding beaker volume unit from μL to g to match actual usage
Add merged_workflow and order_params fields to the return result for more complete workflow information

* feat(dispensing_station): add order_params to the task creation result

Include an order_params field in the create_order return value so callers get the full task parameters

* fix(dispensing_station): use the 90% material directly instead of splitting it into 3 parts

The old logic split the main weighed solid into 3 equal parts as the 90% material; now main_portion is used directly

* feat(bioyond): output order codes and order IDs to support status monitoring after batch task creation

* refactor(registry): simplify task result handling in device configs

Merge the separate order code and order ID fields into a single return_info field
Update related descriptions to reflect the new data structure

* feat(workstation): add HTTP reporting service and task completion tracking

- Add API-required fields in graphio.py
- Implement workstation HTTP service start/stop logic
- Add a task completion tracking dict and wait methods
- Override the task completion report handler to record status
- Support waiting on batch task completion and fetching reports

* refactor(dispensing_station): remove wait_for_order_completion_and_get_report

Replaced by wait_for_multiple_orders_and_get_reports; simplifies the code

* fix: update task report API error

* fix(workstation_http_service): fix device_id retrieval in status queries

Safely obtain device_id during status queries to avoid exceptions when the attribute is missing

* fix(bioyond_studio): improve error handling and logging on material check-in failure

Print more detailed error information when the material check-in API call fails
Also correct the empty-response and failure checks in station.py

* refactor(bioyond): improve bottle carrier assignment logic and comments

Refactor the bottle carrier assignment to use nested loops instead of hard-coded index assignment
Add more detailed coordinate mapping notes clarifying the PLR-to-Bioyond coordinate correspondence

* fix(bioyond_rpc): fix empty return when check-in succeeds without a data field

When the API reports success but returns no data field, return a dict with a success flag instead of an empty dict

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>
Co-authored-by: Junhan Chang <changjh@dp.tech>
2025-11-15 03:11:34 +08:00
xyc
37e0f1037c add new laiyu liquid driver, yaml and json files (#164) 2025-11-15 03:11:33 +08:00
tt
a242253145 Standardize OPC UA device integration into unilab (#78)
* Initial commit, keeping only the current workspace state

* remove redundant arm_slider meshes

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
2025-11-15 03:11:31 +08:00
q434343
448e0074b7 3d sim (#97)
* Modify lh JSON startup

* Modify lh JSON startup

* Modify the backend into a generic sim backend

* Modify yaml paths, adapt the 3D model to the web production environment

* Add laiyu hardware connection

* Modify the pipette state detection method

Modify the pipette state detection method,
add conversion between the three-axis calibration point and zero point,
add a backend for real three-axis motion

* Modify the laiyu liquid handling station

Simplify the move method,
remove software position limits,
fix the issue where the Z axis also had to be re-homed when only the Z axis was used

* Update lh and the laiyu workshop

1. Other liquid handling stations can now be adapted just by changing the backend; the main class remains LiquidHandler, no rewrite needed

2. Change the tip detection criterion to use the tip itself rather than the class

3. Express homing parameters in millimeters for easier manual adjustment

4. Change the homing scheme: on power-up, mechanical homing establishes the mechanical zero; manual homing sets the work-area zero for easier calculation; the two do not interfere

* Modify tip actions

* Modify the virtual simulation method

---------

Co-authored-by: zhangshixiang <@zhangshixiang>
Co-authored-by: Junhan Chang <changjh@dp.tech>
2025-11-15 03:11:30 +08:00
lixinyu1011
304827fc8d 1114 material manual definition tutorial by xinyu (#165)
* Yibin Bioyond workstation deck frontend by_Xinyu

* Material building tutorial by xinyu

* 1114 material manual definition tutorial
2025-11-15 03:11:29 +08:00
Harry Liu
872b3d781f PRCXI Reset Error Correction (#166)
* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Rewrite setup logic to clear error code

* Initialize the step_mode attribute
2025-11-15 03:11:29 +08:00
Xuwznln
813400f2b4 bump version to 0.10.9
update registry
2025-11-15 02:45:30 +08:00
Xuwznln
b6dfe2b944 Resource update & asyncio fix
correct bioyond config

prcxi example

fix append_resource

fix regularcontainer

fix cancel error

fix resource_get param

fix json dumps

support name change during materials change

enable slave mode

change uuid logger to trace level

correct remove_resource stats

disable slave connect websocket

adjust with_children param

modify devices to use correct executor (sleep, create_task)

support sleep and create_task in node

fix run async execution error
2025-11-15 02:45:12 +08:00
WenzheG
8807865649 Add Raman and XRD related code 2025-11-15 02:44:03 +08:00
Guangxin Zhang
5fc7eb7586 Interfaces for the plate sealer, seal peeler, and consumable station 2025-11-15 02:44:02 +08:00
ZiWei
9bd72b48e1 Update workstation.
modify workstation_architecture docs

bioyond_HR (#133)

* feat: Enhance Bioyond synchronization and resource management

- Implemented synchronization for all material types (consumables, samples, reagents) from Bioyond, logging detailed information for each type.
- Improved error handling and logging during synchronization processes.
- Added functionality to save Bioyond material IDs in UniLab resources for future updates.
- Enhanced the `sync_to_external` method to handle material movements correctly, including querying and creating materials in Bioyond.
- Updated warehouse configurations to support new storage types and improved layout for better resource management.
- Introduced new resource types such as reactors and tip boxes, with detailed specifications.
- Modified warehouse factory to support column offsets for naming conventions (e.g., A05-D08).
- Improved resource tracking by merging extra attributes instead of overwriting them.
- Added a new method for updating resources in Bioyond, ensuring better synchronization of resource changes.

* feat: add TipBox and Reactor configurations to bottles.yaml

* fix: fix volume parameter handling in the liquid feeding method

Fix volume parameter handling in solid_feeding_vials, improve the conditions under which the solvents parameter is used

Update the liquid feeding method to support automatic volume calculation from solvent information; add the solvents parameter and update the docs

Add batch creation methods for vial and solution tasks

Add batch creation of 90%/10% vial feeding tasks and diamine solution preparation tasks; update related parameters and defaults
2025-11-15 02:43:50 +08:00
Xuwznln
42b78ab4c1 Update resource extra & uuid.
use ordering to convert identifier to idx

convert identifier to site idx

correct extra key

update extra before transfer

fix multiple instance error

add resource_tree_transfer func

fix itemized carrier assign child resource

support internal device material transfer

remove extra key

use same callback group

support material extra

support material extra
support update_resource_site in extra
2025-11-15 02:43:13 +08:00
Xianwei Qi
9645609a05 PRCXI Update
Modify PRCXI wiring

PRCXI example diagram

Create example_prcxi.json
2025-11-15 02:41:30 +08:00
ZiWei
a2a827d7ac Update workstation & bioyond example
Refine descriptions in Bioyond reaction station YAML

Updated and clarified field and operation descriptions in the reaction_station_bioyond.yaml file for improved accuracy and consistency. Changes include more precise terminology, clearer parameter explanations, and standardized formatting for operation schemas.

refactor(workstation): update reaction station parameter descriptions and add dispensing station config file

Correct reaction station method parameter descriptions for accuracy and clarity
Add the bioyond_dispensing_station.yaml config file

add create_workflow script and test

add invisible_slots to carriers

fix(warehouses): correct dimension parameters of the bioyond_warehouse_1x4x4 warehouse

Adjust the warehouse num_items_x and num_items_z values to match the actual layout, and update item size parameters

save resource get data. allow empty value for layout and cross_section_type

More decks&plates support for bioyond (#115)

refactor(registry): refactor reaction station device config, simplify and update operation commands

Remove the old auto operation commands, add command configs for specific chemical operations
Update module paths and config structure, improve parameter definitions and descriptions

fix(dispensing_station): correct the material info query method call

Call material_id_query via hardware_interface instead of directly, to conform to the interface design conventions
2025-11-15 02:40:54 +08:00
ZiWei
bb3ca645a4 Update graphio together with workstation design.
fix(reaction_station): add a Value field to step parameters to pass to the BY backend

fix(bioyond/warehouses): correct warehouse dimensions and item arrangement parameters

Adjust the warehouse x-axis and z-axis item counts and item size parameters to meet the 4x1x4 spec

fix warehouse serialize/deserialize

fix bioyond converter

fix itemized_carrier.unassign_child_resource

allow not-loaded MSG in registry

add layout serializer & converter

warehouse: use A1-D4; add warehouse layout

fix(graphio): correct coordinate calculation in the bioyond-to-plr resource conversion

Fix resource assignment and type mapping issues

Corrects resource assignment in ItemizedCarrier by using the correct spot key from _ordering. Updates graphio to use 'typeName' instead of 'name' for type mapping in resource_bioyond_to_plr. Renames DummyWorkstation to BioyondWorkstation in workstation_http_service for clarity.
2025-11-15 02:39:01 +08:00
Junhan Chang
37ee43d19a Update ResourceTracker
add more enumeration in POSE

fix converter in resource_tracker
2025-11-15 02:38:01 +08:00
Xuwznln
bc30f23e34 Update create_resource device_id 2025-10-20 21:45:20 +08:00
ZiWei
166d84afe1 fix(reaction_station): clear workflow sequence and parameters to avoid duplicate execution (#113)
Clear the workflow sequence and parameters after creating a task, preventing duplicates from accumulating on the next run
2025-10-17 13:44:36 +08:00
Junhan Chang
1b43c53015 fix resource_get in action 2025-10-17 13:44:35 +08:00
Xuwznln
d4415f5a35 Fix/update resource (#112)
* cancel upload_registry

* Refactor Bioyond workstation and experiment workflow -fix (#111)

* refactor(bioyond_studio): improve material cache loading and parameter validation

Improve material cache loading to support multiple material types and detailed material handling
Change workflow parameter validation field names from key/value to Key/DisplayValue
Remove the unused merge_workflow_with_parameters method
Add get_station_info to fetch basic workstation information
Clean up commented-out code in experiment files and update import paths

* fix: check the parent resource when removing resources

In BaseROS2DeviceNode, check whether the parent resource is None before removing a resource, avoiding a null pointer exception
Also update the Bottle and BottleCarrier classes to accept **kwargs
Fix the capitalization of Liquid_feeding_beaker in test files

* correct return message

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
2025-10-17 03:08:15 +08:00
Xuwznln
0260cbbedb Close #107
Update doc url.
2025-10-16 17:26:45 +08:00
Xuwznln
7c440d10ab Fix/resource UUID and doc fix (#109)
* Fix ResourceTreeSet load error

* Raise error when using unsupported type to create ResourceTreeSet

* Fix children key error

* Fix children key error

* Fix workstation resource not tracking

* Fix workstation deck & children resource dupe

* Fix workstation deck & children resource dupe

* Fix multiple resource error

* Fix resource tree update

* Fix resource tree update

* Force confirm uuid

* Tip more error log

* Refactor Bioyond workstation and experiment workflow (#105)

Refactored the Bioyond workstation classes to improve parameter handling and workflow management. Updated experiment.py to use BioyondReactionStation with deck and material mappings, and enhanced workflow step parameter mapping and execution logic. Adjusted JSON experiment configs, improved workflow sequence handling, and added UUID assignment to PLR materials. Removed unused station_config and material cache logic, and added detailed docstrings and debug output for workflow methods.

* Fix resource get.
Fix resource parent not found.
Mapping uuid for all resources.

* mount parent uuid

* Add logging configuration based on BasicConfig in main function

* fix workstation node error

* fix workstation node error

* Update boot example

* temp fix for resource get

* temp fix for resource get

* provide error info when cant find plr type

* pack repo info

* fix to plr type error

* fix to plr type error

* Update regular container method

* support no size init

* fix comprehensive_station.json

* fix comprehensive_station.json

* fix type conversion

* fix state loading for regular container

* Update deploy-docs.yml

* Update deploy-docs.yml

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
2025-10-16 17:26:07 +08:00
Xuwznln
c85c49817d Fix workstation startup
Update registry
2025-10-13 15:06:30 +08:00
Xuwznln
c70eafa5f0 Fix one-key installation build for windows 2025-10-13 15:06:29 +08:00
Junhan Chang
b64466d443 modify default config 2025-10-13 15:06:26 +08:00
Junhan Chang
ef3f24ed48 add plr_to_bioyond, and refactor bioyond stations 2025-10-13 15:06:25 +08:00
Xuwznln
2a8e8d014b Fix conda pack on windows 2025-10-13 13:19:45 +08:00
Xuwznln
e0da1c7217 Fix one-key installation build
Install conda-pack before pack command

Add conda-pack to base when building one-key installer

Fix param error when using mamba run

Try fix one-key build on linux
2025-10-13 03:33:00 +08:00
hh.(SII)
51d3e61723 fix: rename schema field to resource_schema with serialization and validation aliases (#104)
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
2025-10-13 03:24:20 +08:00
Xuwznln
6b5765bbf3 Complete all one key installation 2025-10-13 03:24:19 +08:00
Xuwznln
eb1f3fbe1c Try fix one-key build on linux 2025-10-13 02:10:05 +08:00
Xuwznln
fb93b1cd94 fix startup env check.
add auto install during one-key installation
2025-10-13 01:59:53 +08:00
Xuwznln
9aeffebde1 0.10.7 Update (#101)
* Cleanup registry to be easy-understanding (#76)

* delete deprecated mock devices

* rename categories

* combine chromatographic devices

* rename rviz simulation nodes

* organic virtual devices

* parse vessel_id

* run registry completion before merge

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

* fix: workstation handlers and vessel_id parsing

* fix: working dir error when input config path
feat: report publish topic when error

* modify default discovery_interval to 15s

* feat: add trace log level

* feat: add ChinWe device control class with serial communication and motor control support (#79)

* fix: drop_tips not using auto resource select

* fix: discard_tips error

* fix: discard_tips

* fix: prcxi_res

* add: prcxi res
fix: startup slow

* feat: workstation example

* fix pumps and liquid_handler handle

* feat: improve protocol node runtime logging

* fix all protocol_compilers and remove deprecated devices

* feat: add use_remote_resource parameter

* fix and remove redundant info

* bugfixes on organic protocols

* fix filter protocol

* fix protocol node

* Temporary compatibility for incorrect driver implementations

* fix: prcxi import error

* use call_async in all service to avoid deadlock

* fix: figure_resource

* Update recipe.yaml

* add workstation template and battery example

* feat: add sk & ak

* update workstation base

* Create workstation_architecture.md

* refactor: workstation_base now contains business logic only; communication and sub-device management move to ProtocolNode

* refactor: ProtocolNode→WorkstationNode

* Add:msgs.action (#83)

* update: Workstation dev bump version from 0.10.3 to 0.10.4 (#84)

* Add:msgs.action

* update: bump version from 0.10.3 to 0.10.4

* simplify resource system

* uncompleted refactor

* example for use WorkstationBase

* feat: websocket

* feat: websocket test

* feat: workstation example

* feat: action status

* fix: registration error for the station's own methods

* fix: restore protocol node handling methods

* fix: build

* fix: missing job_id key

* ws test version 1

* ws test version 2

* ws protocol

* Add logging for material relation upload

* Add logging for material relation upload

* Fix material relation upload

* Fix broken tracker instance tracking for workstations

* Add handle detection, add material edge relation upload

* Fix event loop error

* Fix edge reporting error

* Fix async error

* Update the schema title field

* Support auto-refresh of host node info

* Registry editor

* Fix message errors when status is sent at high frequency

* Add addr parameter

* fix: addr param

* fix: addr param

* Remove labid and mandatory config input

* Add action definitions for LiquidHandlerSetGroup and LiquidHandlerTransferGroup

- Created LiquidHandlerSetGroup.action with fields for group name, wells, and volumes.
- Created LiquidHandlerTransferGroup.action with fields for source and target group names and unit volume.
- Both actions include response fields for return information and success status.

* Add LiquidHandlerSetGroup and LiquidHandlerTransferGroup actions to CMakeLists

* Add set_group and transfer_group methods to PRCXI9300Handler and update liquid_handler.yaml

* Change result_info to a dict type

* Add UAT address replacement

* runze multiple pump support

(cherry picked from commit 49354fcf39)

* remove runze multiple software obtainer

(cherry picked from commit 8bcc92a394)

* support multiple backbone

(cherry picked from commit 4771ff2347)

* Update runze pump format

* Correct runze multiple backbone

* Update runze_multiple_backbone

* Correct runze pump multiple receive method.

* Correct runze pump multiple receive method.

* transfer_group for PRCXI9320: one-to-many and many-to-many

* Remove MQTT, update launch docs, provide registry example files, bump to 0.10.5

* fix import error

* fix dupe upload registry

* refactor ws client

* add server timeout

* Fix: run-column with correct vessel id (#86)

* fix run_column

* Update run_column_protocol.py

(cherry picked from commit e5aa4d940a)

* resource_update use resource_add

* Add slot recommendation feature

* Redefine the input parameters for slot recommendation

* update registry with nested obj

* fix protocol node log_message, added create_resource return value

* fix protocol node log_message, added create_resource return value

* try fix add protocol

* fix resource_add

* Fix the incorrect aspirate registry for the liquid handling station

* Feature/xprbalance-zhida (#80)

* feat(devices): add Zhida GC/MS pretreatment automation workstation

* feat(devices): add mettler_toledo xpr balance

* balance

* Re-complete the Zhida registry

* PRCXI9320 json

* PRCXI9320 json

* PRCXI9320 json

* fix resource download

* remove class for resource

* bump version to 0.10.6

* Update all registries

* Fix protocolnode compatibility

* Fix protocolnode compatibility

* Update install md

* Add Defaultlayout

* Update material interface

* fix dict to tree/nested-dict converter

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* frontend_docs

* create/update resources with POST/PUT for big amount/ small amount data

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

* Workstation templates: Resources and its CRUD, and workstation tasks (#95)

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: 将新威电池测试系统驱动与配置文件并入 workstation_dev_YB2 (#92)

* feat: 新威电池测试系统驱动与注册文件

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

---------

Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>

* Update material interface

* Workstation dev yb2 (#100)

* Refactor and extend reaction station action messages

* Refactor dispensing station tasks to enhance parameter clarity and add batch processing capabilities

- Updated `create_90_10_vial_feeding_task` to include detailed parameters for 90%/10% vial feeding, improving clarity and usability.
- Introduced `create_batch_90_10_vial_feeding_task` for batch processing of 90%/10% vial feeding tasks with JSON formatted input.
- Added `create_batch_diamine_solution_task` for batch preparation of diamine solution, also utilizing JSON formatted input.
- Refined `create_diamine_solution_task` to include additional parameters for better task configuration.
- Enhanced schema descriptions and default values for improved user guidance.

* Fix to_plr_resources

* add update remove

* Support auto-generation of selector registries
Support material transfer

* Fix resource addition

* Fix transfer_resource_to_another generation

* Update transfer_resource_to_another parameters to support a spot argument

* Add test_resource action

* fix host_node error

* fix host_node test_resource error

* fix host_node test_resource error

* Filter local actions

* Move internal actions for host node compatibility

* Fix bug where sync task errors were not displayed

* feat: allow returning materials not owned by this node; they can later be distinguished via decoration, so no warning is raised

* update todo

* modify bioyond/plr converter, bioyond resource registry, and tests

* pass the tests

* update todo

* add conda-pack-build.yml

* add auto install script for conda-pack-build.yml

(cherry picked from commit 172599adcf)

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* Add version in __init__.py
Update conda-pack-build.yml
Add create_zip_archive.py

* Update conda-pack-build.yml

* Update conda-pack-build.yml (with mamba)

* Update conda-pack-build.yml

* Fix FileNotFoundError

* Try fix 'charmap' codec can't encode characters in position 16-23: character maps to <undefined>

* Fix unilabos msgs search error

* Fix environment_check.py

* Update recipe.yaml

* Update registry. Update uuid loop figure method. Update install docs.

* Fix nested conda pack

* Fix one-key installation path error

* Bump version to 0.10.7

* Workshop bj (#99)

* Add LaiYu Liquid device integration and tests

Introduce LaiYu Liquid device implementation, including backend, controllers, drivers, configuration, and resource files. Add hardware connection, tip pickup, and simplified test scripts, as well as experiment and registry configuration for LaiYu Liquid. Documentation and .gitignore for the device are also included.

* feat(LaiYu_Liquid): refactor device module structure and add hardware docs

refactor: reorganize the LaiYu_Liquid module directory structure
docs: add SOPA pipette and stepper motor control command docs
fix: correct the default max volume in device configs
test: add workbench configuration test cases
chore: remove outdated test scripts and config files

* add

* refactor: rename LaiYu_Liquid.py to laiyu_liquid_main.py and update all import references

- Use git mv to rename LaiYu_Liquid.py to laiyu_liquid_main.py
- Update import references in all affected files
- Keep behavior unchanged; only improve naming consistency
- Tests confirm all imports work

* fix: export LaiYuLiquidBackend from core/__init__.py

- Add LaiYuLiquidBackend to the import list
- Add LaiYuLiquidBackend to the __all__ export list
- Ensure all major classes can be imported correctly

* Fix folder name capitalization

* Battery assembly workstation secondary development tutorial (with TOC), uploaded to dev (#94)

* Battery assembly workstation secondary development tutorial

* Update intro.md

* Materials tutorial

* Update the materials tutorial, JSON-format comments

* Update prcxi driver & fix transfer_liquid mix_times (#90)

* Update prcxi driver & fix transfer_liquid mix_times

* fix: correct mix_times type

* Update liquid_handler registry

* test: prcxi.py

* Update registry from pr

* fix one-key script not existing

* clean files

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>
2025-10-12 23:34:26 +08:00
281 changed files with 32455 additions and 42198 deletions

View File

@@ -1,62 +0,0 @@
# unilabos: Production package (depends on unilabos-env + pip unilabos)
# For production deployment
package:
  name: unilabos
  version: 0.10.19
source:
  path: ../../unilabos
  target_directory: unilabos
build:
  python:
    entry_points:
      - unilab = unilabos.app.main:main
  script:
    - set PIP_NO_INDEX=
    - if: win
      then:
        - copy %RECIPE_DIR%\..\..\MANIFEST.in %SRC_DIR%
        - copy %RECIPE_DIR%\..\..\setup.cfg %SRC_DIR%
        - copy %RECIPE_DIR%\..\..\setup.py %SRC_DIR%
        - pip install %SRC_DIR%
    - if: unix
      then:
        - cp $RECIPE_DIR/../../MANIFEST.in $SRC_DIR
        - cp $RECIPE_DIR/../../setup.cfg $SRC_DIR
        - cp $RECIPE_DIR/../../setup.py $SRC_DIR
        - pip install $SRC_DIR
requirements:
  host:
    - python ==3.11.14
    - pip
    - setuptools
    - zstd
    - zstandard
  run:
    - zstd
    - zstandard
    - networkx
    - typing_extensions
    - websockets
    - pint
    - fastapi
    - jinja2
    - requests
    - uvicorn
    - if: not osx
      then:
        - opcua
    - pyserial
    - pandas
    - pymodbus
    - matplotlib
    - pylibftdi
    - uni-lab::unilabos-env ==0.10.19
about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS - Production package with minimal ROS2 dependencies"

View File

@@ -1,39 +0,0 @@
# unilabos-env: conda environment dependencies (ROS2 + conda packages)
package:
  name: unilabos-env
  version: 0.10.19
build:
  noarch: generic
requirements:
  run:
    # Python
    - zstd
    - zstandard
    - conda-forge::python ==3.11.14
    - conda-forge::opencv
    # ROS2 dependencies (from ci-check.yml)
    - robostack-staging::ros-humble-ros-core
    - robostack-staging::ros-humble-action-msgs
    - robostack-staging::ros-humble-std-msgs
    - robostack-staging::ros-humble-geometry-msgs
    - robostack-staging::ros-humble-control-msgs
    - robostack-staging::ros-humble-nav2-msgs
    - robostack-staging::ros-humble-cv-bridge
    - robostack-staging::ros-humble-vision-opencv
    - robostack-staging::ros-humble-tf-transformations
    - robostack-staging::ros-humble-moveit-msgs
    - robostack-staging::ros-humble-tf2-ros
    - robostack-staging::ros-humble-tf2-ros-py
    - conda-forge::transforms3d
    - conda-forge::uv
    # UniLabOS custom messages
    - uni-lab::ros-humble-unilabos-msgs
about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS Environment - ROS2 and conda dependencies"

View File

@@ -1,42 +0,0 @@
# unilabos-full: Full package with all features
# Depends on unilabos + complete ROS2 desktop + dev tools
package:
  name: unilabos-full
  version: 0.10.19
build:
  noarch: generic
requirements:
  run:
    # Base unilabos package (includes unilabos-env)
    - uni-lab::unilabos ==0.10.19
    # Documentation tools
    - sphinx
    - sphinx_rtd_theme
    # Web UI
    - gradio
    - flask
    # Interactive development
    - ipython
    - jupyter
    - jupyros
    - colcon-common-extensions
    # ROS2 full desktop (includes rviz2, gazebo, etc.)
    - robostack-staging::ros-humble-desktop-full
    # Navigation and motion control
    - ros-humble-navigation2
    - ros-humble-ros2-control
    - ros-humble-robot-state-publisher
    - ros-humble-joint-state-publisher
    # MoveIt motion planning
    - ros-humble-moveit
    - ros-humble-moveit-servo
    # Simulation
    - ros-humble-simulation
about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS Full - Complete package with ROS2 Desktop, MoveIt, Navigation2, Gazebo, Jupyter"

.conda/recipe.yaml Normal file
View File

@@ -0,0 +1,92 @@
package:
  name: unilabos
  version: 0.10.13
source:
  path: ../unilabos
  target_directory: unilabos
build:
  python:
    entry_points:
      - unilab = unilabos.app.main:main
  script:
    - set PIP_NO_INDEX=
    - if: win
      then:
        - copy %RECIPE_DIR%\..\MANIFEST.in %SRC_DIR%
        - copy %RECIPE_DIR%\..\setup.cfg %SRC_DIR%
        - copy %RECIPE_DIR%\..\setup.py %SRC_DIR%
        - call %PYTHON% -m pip install %SRC_DIR%
    - if: unix
      then:
        - cp $RECIPE_DIR/../MANIFEST.in $SRC_DIR
        - cp $RECIPE_DIR/../setup.cfg $SRC_DIR
        - cp $RECIPE_DIR/../setup.py $SRC_DIR
        - $PYTHON -m pip install $SRC_DIR
requirements:
  host:
    - python ==3.11.11
    - pip
    - setuptools
    - zstd
    - zstandard
  run:
    - conda-forge::python ==3.11.11
    - compilers
    - cmake
    - zstd
    - zstandard
    - ninja
    - if: unix
      then:
        - make
    - sphinx
    - sphinx_rtd_theme
    - numpy
    - scipy
    - pandas
    - networkx
    - matplotlib
    - pint
    - pyserial
    - pyusb
    - pylibftdi
    - pymodbus
    - python-can
    - pyvisa
    - opencv
    - pydantic
    - fastapi
    - uvicorn
    - gradio
    - flask
    - websockets
    - ipython
    - jupyter
    - jupyros
    - colcon-common-extensions
    - robostack-staging::ros-humble-desktop-full
    - robostack-staging::ros-humble-control-msgs
    - robostack-staging::ros-humble-sensor-msgs
    - robostack-staging::ros-humble-trajectory-msgs
    - ros-humble-navigation2
    - ros-humble-ros2-control
    - ros-humble-robot-state-publisher
    - ros-humble-joint-state-publisher
    - ros-humble-rosbridge-server
    - ros-humble-cv-bridge
    - ros-humble-tf2
    - ros-humble-moveit
    - ros-humble-moveit-servo
    - ros-humble-simulation
    - ros-humble-tf-transformations
    - transforms3d
    - uni-lab::ros-humble-unilabos-msgs
about:
  repository: https://github.com/dptech-corp/Uni-Lab-OS
  license: GPL-3.0-only
  description: "Uni-Lab-OS"

View File

@@ -0,0 +1,9 @@
@echo off
setlocal enabledelayedexpansion
REM upgrade pip
"%PREFIX%\python.exe" -m pip install --upgrade pip
REM install extra deps
"%PREFIX%\python.exe" -m pip install paho-mqtt opentrons_shared_data
"%PREFIX%\python.exe" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euxo pipefail
# make sure pip is available
"$PREFIX/bin/python" -m pip install --upgrade pip
# install extra deps
"$PREFIX/bin/python" -m pip install paho-mqtt opentrons_shared_data
"$PREFIX/bin/python" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

View File

@@ -1,160 +0,0 @@
---
name: add-device
description: Guide for adding new devices to Uni-Lab-OS (接入新设备). Uses @device decorator + AST auto-scanning instead of manual YAML. Walks through device category, communication protocol, driver creation with decorators, and graph file setup. Use when the user wants to add/integrate a new device, create a device driver, write a device class, or mentions 接入设备/添加设备/设备驱动/物模型.
---
# Adding a New Device to Uni-Lab-OS
**Step 1:** Use the Read tool to read `docs/ai_guides/add_device.md` for the full device integration guide.
The guide covers the device category (thing model) list, communication protocol templates, and a checklist of common mistakes. Search `unilabos/devices/` for existing device implementations to use as reference.
---
## Decorator Reference
### @device — device class decorator
```python
from unilabos.registry.decorators import device
# Single device
@device(
    id="my_device.vendor",        # unique registry identifier (required)
    category=["temperature"],     # list of category tags (required)
    description="Device description",
    display_name="Display name",  # UI display name (defaults to id)
    icon="DeviceIcon.webp",       # icon filename
    version="1.0.0",
    device_type="python",         # "python" or "ros2"
    handles=[...],                # port list (InputHandle / OutputHandle)
    model={...},                  # 3D model configuration
    hardware_interface=HardwareInterface(...),  # hardware communication interface
)
# Multiple devices (register several device IDs for one class, each with its own handles etc.)
@device(
    ids=["pump.vendor.model_A", "pump.vendor.model_B"],
    id_meta={
        "pump.vendor.model_A": {"handles": [...], "description": "Model A"},
        "pump.vendor.model_B": {"handles": [...], "description": "Model B"},
    },
    category=["pump_and_valve"],
)
```
### @action — action method decorator
```python
from unilabos.registry.decorators import action
@action            # bare: registered as a UniLabJsonCommand action
@action()          # same as above
@action(description="Perform an operation")  # with description
@action(
    action_type=HeatChill,           # ROS Action message type
    goal={"temperature": "temp"},    # goal field mapping
    feedback={},                     # feedback field mapping
    result={},                       # result field mapping
    handles=[...],                   # action-level ports
    goal_default={"temp": 25.0},     # goal defaults
    placeholder_keys={...},          # parameter placeholders
    always_free=True,                # not subject to queueing limits
    auto_prefix=True,                # force the auto- prefix
    parent=True,                     # take the parameter signature from the parent class MRO
)
```
**Auto-detection rules:**
- Public methods with `@action` → registered as actions (the method name is the action name)
- **Public methods without `@action`** → auto-registered as `auto-{method_name}` actions
- Methods starting with `_` → not scanned
- Methods marked `@not_action` → excluded
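A minimal sketch of these classification rules (illustrative only — the real registry uses AST scanning, and the `_is_action`/`_not_action` marker attributes below are hypothetical stand-ins for the decorators):

```python
def classify_actions(cls):
    """Classify public methods according to the scanning rules above (sketch)."""
    actions = {}
    for name, fn in vars(cls).items():
        if name.startswith("_") or not callable(fn):
            continue  # underscore-prefixed members are not scanned
        if getattr(fn, "_not_action", False):
            continue  # @not_action methods are excluded
        if getattr(fn, "_is_action", False):
            actions[name] = fn  # @action methods keep their own name
        else:
            actions[f"auto-{name}"] = fn  # plain public methods get the auto- prefix
    return actions

class Pump:
    def start(self):  # imagine @action here
        pass
    start._is_action = True

    def read_flow(self):  # no decorator
        pass

    def _reconnect(self):  # private, not scanned
        pass

    def post_init(self):  # imagine @not_action here
        pass
    post_init._not_action = True
```

With this class, `start` stays under its own name while `read_flow` becomes `auto-read_flow`; the private and excluded methods never appear.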
### @topic_config — state property configuration
```python
from unilabos.registry.decorators import topic_config
@property
@topic_config(
    period=5.0,           # publish period in seconds, default 5.0
    print_publish=False,  # whether to log each publish
    qos=10,               # QoS depth, default 10
    name="custom_name",   # custom publish name (defaults to the property name)
)
def temperature(self) -> float:
    return self.data.get("temperature", 0.0)
```
### Helper decorators
```python
from unilabos.registry.decorators import not_action, always_free
@not_action   # mark as not an action (post_init, helper methods, etc.)
@always_free  # mark as not subject to queueing limits (query-style operations)
```
---
## Device Template
```python
import logging
from typing import Any, Dict, Optional
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode
from unilabos.registry.decorators import device, action, topic_config, not_action

@device(id="my_device", category=["my_category"], description="Device description")
class MyDevice:
    _ros_node: BaseROS2DeviceNode

    def __init__(self, device_id: Optional[str] = None, config: Optional[Dict[str, Any]] = None, **kwargs):
        self.device_id = device_id or "my_device"
        self.config = config or {}
        self.logger = logging.getLogger(f"MyDevice.{self.device_id}")
        self.data: Dict[str, Any] = {"status": "Idle"}

    @not_action
    def post_init(self, ros_node: BaseROS2DeviceNode) -> None:
        self._ros_node = ros_node

    @action
    async def initialize(self) -> bool:
        self.data["status"] = "Ready"
        return True

    @action
    async def cleanup(self) -> bool:
        self.data["status"] = "Offline"
        return True

    @action(description="Perform an operation")
    def my_action(self, param: float = 0.0, name: str = "") -> Dict[str, Any]:
        """With @action → registered as the 'my_action' action"""
        return {"success": True}

    def get_info(self) -> Dict[str, Any]:
        """No @action → auto-registered as the 'auto-get_info' action"""
        return {"device_id": self.device_id}

    @property
    @topic_config()
    def status(self) -> str:
        return self.data.get("status", "Idle")

    @property
    @topic_config(period=2.0)
    def temperature(self) -> float:
        return self.data.get("temperature", 0.0)
```
### Key points
- Put the `_ros_node: BaseROS2DeviceNode` type annotation at the top of the class body
- The `__init__` signature is fixed: `(self, device_id=None, config=None, **kwargs)`
- Mark `post_init` with `@not_action` and annotate its parameter as `BaseROS2DeviceNode`
- Store runtime state in the `self.data` dict
- Place device files under `unilabos/devices/<category>/`


@@ -1,351 +0,0 @@
---
name: add-resource
description: Guide for adding new resources (materials, bottles, carriers, decks, warehouses) to Uni-Lab-OS (添加新物料/资源). Uses @resource decorator for AST auto-scanning. Covers Bottle, Carrier, Deck, WareHouse definitions. Use when the user wants to add resources, define materials, create a deck layout, add bottles/carriers/plates, or mentions 物料/资源/resource/bottle/carrier/deck/plate/warehouse.
---
# Adding New Material Resources
The Uni-Lab-OS resource system is based on PyLabRobot, extended with Bottle, Carrier, WareHouse, and Deck classes for laboratory material management. Register resources with the `@resource` decorator; AST auto-scanning generates the registry entries.
---
## Resource types
| Type | Base class | Purpose | Examples |
|------|------|------|------|
| **Bottle** | `Well` (PyLabRobot) | single container (bottle, vial, beaker, reactor) | reagent bottle, powder bottle |
| **BottleCarrier** | `ItemizedCarrier` | multi-slot carrier (holds multiple Bottles) | 6-position reagent rack, tip box |
| **WareHouse** | `ItemizedCarrier` | stack/warehouse (holds multiple Carriers) | 4x4 stack |
| **Deck** | `Deck` (PyLabRobot) | workstation deck (holds multiple WareHouses) | reaction-station deck |
**Hierarchy:** `Deck` → `WareHouse` → `BottleCarrier` → `Bottle`
A WareHouse is essentially the same concept as a Site: both define a fixed set of placement slots; a WareHouse is simply nested one level deeper, under a Deck. In both cases the developer computes each slot's offset coordinates from the actual physical dimensions.
---
## The @resource decorator
```python
from unilabos.registry.decorators import resource
@resource(
    id="my_resource_id",      # unique registry identifier (required)
    category=["bottles"],     # list of category tags (required)
    description="Resource description",
    icon="",                  # icon
    version="1.0.0",
    handles=[...],            # port list (InputHandle / OutputHandle)
    model={...},              # 3D model configuration
    class_type="pylabrobot",  # "python" / "pylabrobot" / "unilabos"
)
```
---
## Authoring conventions
### Naming rules
1. **`name` parameter as prefix**: every factory function must accept a `name: str` parameter and prefix child material names with it, so instance names are globally unique at runtime
2. **Bottle naming convention**: reagent bottle → Bottle, beaker → Beaker, flask → Flask, vial → Vial
3. **Function name = `@resource(id=...)`**: keep the factory function name identical to the registry id
### Child material naming examples
```python
# Prefix the sites inside a Carrier with name
for k, v in sites.items():
    v.name = f"{name}_{v.name}"  # "堆栈1左_A01", "堆栈1左_B02" ...
# Prefix Bottles placed into a Carrier with name
carrier[0] = My_Reagent_Bottle(f"{name}_flask_1")         # "堆栈1左_flask_1"
carrier[i] = My_Solid_Vial(f"{name}_vial_{ordering[i]}")  # "堆栈1左_vial_A1"
# create_homogeneous_resources takes a name_prefix
sites=create_homogeneous_resources(
    klass=ResourceHolder,
    locations=[...],
    name_prefix=name,  # auto-generates "{name}_0", "{name}_1" ...
)
# In Deck setup, pass the warehouse name in as name
self.warehouses = {
    "堆栈1左": my_warehouse_4x4("堆栈1左"),    # WareHouse.name = "堆栈1左"
    "试剂堆栈": my_reagent_stack("试剂堆栈"),  # WareHouse.name = "试剂堆栈"
}
```
### Other conventions
- **max_volume is in μL**: 500 mL = 500000
- **Dimensions are in mm**: `diameter`, `height`, `size_x/y/z`, `dx/dy/dz`
- **BottleCarrier must set `num_items_x/y/z`**: the frontend uses them to render the layout
- **A Deck's `__init__` must accept `setup=False`**: `config.setup=true` in the graph file triggers `setup()`
- **Group files by project**: put one workstation's resources under `unilabos/resources/<project>/`
- **`__init__` must accept every field that `serialize()` outputs**: the `serialize()` output is passed back into `__init__` as `config`, so accept the fields explicitly or via `**kwargs`, otherwise deserialization fails
- **Persist runtime state via `serialize_state()`**: store mutable information (e.g. material contents, liquid volume) in the `_unilabos_state` dict, JSON-serializable primitives only
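The unit conventions above are easy to trip over; two tiny (hypothetical) helpers make the μL/mm expectations explicit:

```python
def ml_to_ul(ml: float) -> float:
    """max_volume is stored in microliters: 500 mL -> 500000 µL."""
    return ml * 1000.0

def cm_to_mm(cm: float) -> float:
    """All dimensions (diameter, height, size_x/y/z, dx/dy/dz) are in mm."""
    return cm * 10.0

print(ml_to_ul(500))   # 500000.0
print(cm_to_mm(12.7))  # 127.0
```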
---
## Resource templates
### Bottle
```python
from unilabos.registry.decorators import resource
from unilabos.resources.itemized_carrier import Bottle
@resource(id="My_Reagent_Bottle", category=["bottles"], description="My reagent bottle")
def My_Reagent_Bottle(
name: str,
diameter: float = 70.0,
height: float = 120.0,
max_volume: float = 500000.0,
barcode: str = None,
) -> Bottle:
return Bottle(
name=name,
diameter=diameter,
height=height,
max_volume=max_volume,
barcode=barcode,
model="My_Reagent_Bottle",
)
```
**Bottle parameters:**
- `name`: instance name (unique at runtime; passed in with a prefix by the parent Carrier)
- `diameter`: bottle diameter (mm)
- `height`: bottle height (mm)
- `max_volume`: maximum volume (**μL**): 500 mL = 500000
- `barcode`: barcode (optional)
### BottleCarrier
```python
from pylabrobot.resources import ResourceHolder
from pylabrobot.resources.carrier import create_ordered_items_2d
from unilabos.resources.itemized_carrier import BottleCarrier
from unilabos.registry.decorators import resource
@resource(id="My_6SlotCarrier", category=["bottle_carriers"], description="Six-slot carrier")
def My_6SlotCarrier(name: str) -> BottleCarrier:
sites = create_ordered_items_2d(
klass=ResourceHolder,
num_items_x=3, num_items_y=2,
dx=10.0, dy=10.0, dz=5.0,
item_dx=42.0, item_dy=35.0,
size_x=20.0, size_y=20.0, size_z=50.0,
)
    # Prefix child sites with name
for k, v in sites.items():
v.name = f"{name}_{v.name}"
carrier = BottleCarrier(
name=name, size_x=146.0, size_y=80.0, size_z=55.0,
sites=sites, model="My_6SlotCarrier",
)
carrier.num_items_x = 3
carrier.num_items_y = 2
carrier.num_items_z = 1
    # Prefix Bottles with name when placing them
ordering = ["A1", "B1", "A2", "B2", "A3", "B3"]
for i in range(6):
carrier[i] = My_Reagent_Bottle(f"{name}_vial_{ordering[i]}")
return carrier
```
### WareHouse / Deck placement slots
A WareHouse and a Site are essentially the same concept: both define a fixed set of placement slots whose offset coordinates you compute in batch from the physical dimensions; a WareHouse is simply nested one level deeper, under a Deck. Compute each slot offset directly from measurements of the real hardware.
#### WareHouse: use warehouse_factory
```python
from unilabos.resources.warehouse import warehouse_factory
from unilabos.registry.decorators import resource
@resource(id="my_warehouse_4x4", category=["warehouse"], description="4x4 stack warehouse")
def my_warehouse_4x4(name: str) -> "WareHouse":
    return warehouse_factory(
        name=name,
        num_items_x=4, num_items_y=4, num_items_z=1,
        dx=10.0, dy=10.0, dz=10.0,                    # starting offset of the first slot
        item_dx=147.0, item_dy=106.0, item_dz=130.0,  # slot pitch
        resource_size_x=127.0, resource_size_y=85.0, resource_size_z=100.0,  # slot size
        model="my_warehouse_4x4",
        col_offset=0,        # column label offset: 0 → A01, 4 → A05
        layout="row-major",  # "row-major" / "col-major" / "vertical-col-major"
    )
```
`warehouse_factory` parameters:
- `dx/dy/dz`: offset of the first slot relative to the WareHouse origin (mm)
- `item_dx/item_dy/item_dz`: pitch between adjacent slots (mm); measure the real physical spacing
- `resource_size_x/y/z`: usable placement area of each slot
- `layout`: controls slot labels and coordinate mapping
  - `"row-major"`: A01,A02,...,B01,B02,... (row-first, for horizontal arrangements)
  - `"col-major"`: A01,B01,...,A02,B02,... (column-first)
  - `"vertical-col-major"`: vertical arrangement, y coordinates reversed
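As a sketch of how the two main layouts order their labels (assuming letter rows and zero-padded column numbers, as in the examples above; the real `warehouse_factory` may differ in details):

```python
def slot_labels(num_x: int, num_y: int, layout: str = "col-major",
                col_offset: int = 0, row_offset: int = 0):
    """Generate slot labels like A01, B01, ... for a num_x x num_y warehouse."""
    rows = [chr(ord("A") + row_offset + r) for r in range(num_y)]
    cols = [f"{col_offset + c + 1:02d}" for c in range(num_x)]
    if layout == "row-major":
        # walk each row left to right: A01, A02, ..., B01, ...
        return [f"{r}{c}" for r in rows for c in cols]
    # col-major (default): walk each column top to bottom: A01, B01, ...
    return [f"{r}{c}" for c in cols for r in rows]

print(slot_labels(4, 4, "row-major")[:5])  # ['A01', 'A02', 'A03', 'A04', 'B01']
print(slot_labels(4, 4, "col-major")[:5])  # ['A01', 'B01', 'C01', 'D01', 'A02']
print(slot_labels(4, 4, "row-major", col_offset=4)[:2])  # ['A05', 'A06']
```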
#### Deck assembles WareHouses
The Deck's `setup()` places each WareHouse at its coordinates:
```python
from pylabrobot.resources import Deck, Coordinate
from unilabos.registry.decorators import resource
@resource(id="MyStation_Deck", category=["deck"], description="My workstation Deck")
class MyStation_Deck(Deck):
def __init__(self, name="MyStation_Deck", size_x=2700.0, size_y=1080.0, size_z=1500.0,
category="deck", setup=False, **kwargs) -> None:
super().__init__(name=name, size_x=size_x, size_y=size_y, size_z=size_z)
if setup:
self.setup()
def setup(self) -> None:
self.warehouses = {
"堆栈1左": my_warehouse_4x4("堆栈1左"),
"堆栈1右": my_warehouse_4x4("堆栈1右"),
}
self.warehouse_locations = {
            "堆栈1左": Coordinate(-200.0, 400.0, 0.0),  # measure and compute yourself
"堆栈1右": Coordinate(2350.0, 400.0, 0.0),
}
for wh_name, wh in self.warehouses.items():
self.assign_child_resource(wh, location=self.warehouse_locations[wh_name])
```
#### Site mode (frontend-directed placement)
For devices with fixed wells/slots (e.g. the PRCXI 9300 liquid handler), the Deck defines the placement slots shown in the frontend via a `sites` list; the frontend renders a drag-and-drop slot layout from it:
```python
import collections
from typing import Any, Dict, List, Optional
from pylabrobot.resources import Deck, Resource, Coordinate
from unilabos.registry.decorators import resource
@resource(id="MyLabDeck", category=["deck"], description="Deck with Site-directed placement")
class MyLabDeck(Deck):
    # Batch-compute slot coordinate offsets from measurements of the device deck
_DEFAULT_SITE_POSITIONS = [
(0, 0, 0), (138, 0, 0), (276, 0, 0), (414, 0, 0), # T1-T4
(0, 96, 0), (138, 96, 0), (276, 96, 0), (414, 96, 0), # T5-T8
]
_DEFAULT_SITE_SIZE = {"width": 128.0, "height": 86.0, "depth": 0}
_DEFAULT_CONTENT_TYPE = ["plate", "tip_rack", "tube_rack", "adaptor"]
def __init__(self, name: str, size_x: float, size_y: float, size_z: float,
sites: Optional[List[Dict[str, Any]]] = None, **kwargs):
super().__init__(size_x, size_y, size_z, name)
if sites is not None:
self.sites = [dict(s) for s in sites]
else:
self.sites = []
for i, (x, y, z) in enumerate(self._DEFAULT_SITE_POSITIONS):
self.sites.append({
                "label": f"T{i + 1}",                  # slot label shown in the frontend
                "visible": True,                       # visible in the frontend
                "position": {"x": x, "y": y, "z": z},  # physical slot coordinates
                "size": dict(self._DEFAULT_SITE_SIZE),             # slot size
                "content_type": list(self._DEFAULT_CONTENT_TYPE),  # allowed material types
})
self._ordering = collections.OrderedDict(
(site["label"], None) for site in self.sites
)
def assign_child_resource(self, resource: Resource,
location: Optional[Coordinate] = None,
reassign: bool = True,
spot: Optional[int] = None):
idx = spot
if spot is None:
for i, site in enumerate(self.sites):
if site.get("label") == resource.name:
idx = i
break
if idx is None:
for i in range(len(self.sites)):
if self._get_site_resource(i) is None:
idx = i
break
if idx is None:
raise ValueError(f"No available site for '{resource.name}'")
loc = Coordinate(**self.sites[idx]["position"])
super().assign_child_resource(resource, location=loc, reassign=reassign)
def serialize(self) -> dict:
data = super().serialize()
sites_out = []
for i, site in enumerate(self.sites):
occupied = self._get_site_resource(i)
sites_out.append({
"label": site["label"],
"visible": site.get("visible", True),
"occupied_by": occupied.name if occupied else None,
"position": site["position"],
"size": site["size"],
"content_type": site["content_type"],
})
data["sites"] = sites_out
return data
```
**Site field reference:**
| Field | Type | Description |
|------|------|------|
| `label` | str | slot label (e.g. `"T1"`); shown in the frontend and also matched against resource.name |
| `visible` | bool | whether the slot is visible in the frontend |
| `position` | dict | physical coordinates `{x, y, z}` (mm); measure and compute the offsets yourself |
| `size` | dict | slot size `{width, height, depth}` (mm) |
| `content_type` | list | allowed material types, e.g. `["plate", "tip_rack", "tube_rack", "adaptor"]` |
**Reference implementation:** `PRCXI9300Deck` in `unilabos/devices/liquid_handling/prcxi/prcxi.py` (4x4, 16 sites)
---
## File locations
```
unilabos/resources/
├── <project>/               # grouped by project
│   ├── bottles.py           # Bottle factory functions
│   ├── bottle_carriers.py   # Carrier factory functions
│   ├── warehouses.py        # WareHouse factory functions
│   └── decks.py             # Deck class definitions
```
---
## Verification
```bash
# resource can be imported
python -c "from unilabos.resources.my_project.bottles import My_Reagent_Bottle; print(My_Reagent_Bottle('test'))"
# startup test (AST auto-scan)
unilab -g <graph>.json
```
YAML is still needed only for third-party library resources (e.g. resources built into pylabrobot, which carry no `@resource` decorator).
---
## Key paths
| Item | Path |
|------|------|
| Bottle/Carrier base classes | `unilabos/resources/itemized_carrier.py` |
| WareHouse base class + factory | `unilabos/resources/warehouse.py` |
| PLR registration | `unilabos/resources/plr_additional_res_reg.py` |
| Decorator definitions | `unilabos/registry/decorators.py` |


@@ -1,292 +0,0 @@
# Resources: Advanced Reference
This file supplements SKILL.md with the class hierarchy, serialization/deserialization, Bioyond material sync, non-bottle resources, and the warehouse factory pattern. Agents should read it on demand when implementing these features.
---
## 1. Class hierarchy
```
PyLabRobot
├── Resource (PLR base class)
│   ├── Well
│   │   └── Bottle (unilabos) → bottles/vials/beakers/reactors
│   ├── Deck
│   │   └── custom Deck classes (unilabos) → workstation decks
│   ├── ResourceHolder → slot placeholder
│   └── Container
│       └── Battery (unilabos) → assembled battery
├── ItemizedCarrier (unilabos, extends Resource)
│   ├── BottleCarrier (unilabos) → bottle carrier
│   └── WareHouse (unilabos) → stack warehouse
├── ItemizedResource (PLR)
│   └── MagazineHolder (unilabos) → magazine carrier
└── ResourceStack (PLR)
    └── Magazine (unilabos) → magazine slot
```
### Bottle class details
```python
class Bottle(Well):
    def __init__(self, name, diameter, height, max_volume,
                 size_x=0.0, size_y=0.0, size_z=0.0,
                 barcode=None, category="container", model=None, **kwargs):
        super().__init__(
            name=name,
            size_x=diameter,  # PLR uses diameter as size_x/size_y
            size_y=diameter,
            size_z=height,    # PLR uses height as size_z
            max_volume=max_volume,
            category=category,
            model=model,
            bottom_type="flat",
            cross_section_type="circle"
        )
```
Note that `size_x = size_y = diameter` and `size_z = height`.
### ItemizedCarrier core methods
| Method | Description |
|------|------|
| `__getitem__(identifier)` | access a slot by index or Excel-style identifier (e.g. `"A01"`) |
| `__setitem__(identifier, resource)` | place a resource into a slot |
| `get_child_identifier(child)` | get a child resource's identifier |
| `capacity` | total number of slots |
| `sites` | dict of all slots |
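To illustrate the Excel-style identifiers (a sketch only; the real `ItemizedCarrier` indexing may differ), converting an `"A01"`-style label to a flat column-major index could look like:

```python
import string

def identifier_to_index(identifier: str, num_rows: int) -> int:
    """Convert an Excel-style slot identifier (row letter + column number,
    e.g. "A01") to a flat column-major index. Illustration only."""
    row = string.ascii_uppercase.index(identifier[0])  # "A" -> 0, "B" -> 1, ...
    col = int(identifier[1:]) - 1                      # "01" -> 0, "02" -> 1, ...
    return col * num_rows + row

print(identifier_to_index("A01", num_rows=4))  # 0
print(identifier_to_index("B01", num_rows=4))  # 1
print(identifier_to_index("A02", num_rows=4))  # 4
```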
---
## 2. Serialization and deserialization
### PLR ↔ UniLab conversion
| Function | Location | Direction |
|------|------|------|
| `ResourceTreeSet.from_plr_resources(resources)` | `resource_tracker.py` | PLR → UniLab |
| `ResourceTreeSet.to_plr_resources()` | `resource_tracker.py` | UniLab → PLR |
### `from_plr_resources` flow
```
PLR Resource
  ↓ build_uuid_mapping (recursively generate UUIDs)
  ↓ resource.serialize() → dict
  ↓ resource.serialize_all_state() → states
  ↓ resource_plr_inner (recursively build ResourceDictInstance)
ResourceTreeSet
```
Key point: each PLR resource carries its UUID in the `unilabos_uuid` attribute and extension data (e.g. the `class` name) in `unilabos_extra`.
### `to_plr_resources` flow
```
ResourceTreeSet
  ↓ collect_node_data (collect UUIDs, state, extension data)
  ↓ node_to_plr_dict (convert to the PLR dict format)
  ↓ find_subclass(type_name, PLRResource) (look up the PLR subclass)
  ↓ sub_cls.deserialize(plr_dict) (deserialize)
  ↓ loop_set_uuid, loop_set_extra (recursively set UUID and extras)
PLR Resource
```
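The `build_uuid_mapping` step above can be sketched as a recursive walk (over a toy dict tree; the real function in `resource_tracker.py` operates on PLR resource objects):

```python
import uuid
from typing import Optional

def build_uuid_mapping(node: dict, mapping: Optional[dict] = None) -> dict:
    """Recursively assign a unilabos_uuid to every node in a resource tree."""
    mapping = {} if mapping is None else mapping
    node["unilabos_uuid"] = str(uuid.uuid4())
    mapping[node["name"]] = node["unilabos_uuid"]
    for child in node.get("children", []):
        build_uuid_mapping(child, mapping)
    return mapping

tree = {"name": "deck", "children": [
    {"name": "carrier", "children": [{"name": "bottle"}]},
]}
mapping = build_uuid_mapping(tree)
print(sorted(mapping))  # ['bottle', 'carrier', 'deck']
```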
### Bottle serialization
```python
class Bottle(Well):
def serialize(self) -> dict:
data = super().serialize()
return {**data, "diameter": self.diameter, "height": self.height}
@classmethod
def deserialize(cls, data: dict, allow_marshal=False):
barcode_data = data.pop("barcode", None)
instance = super().deserialize(data, allow_marshal=allow_marshal)
if barcode_data and isinstance(barcode_data, str):
instance.barcode = barcode_data
return instance
```
---
## 3. Bioyond material sync
### Bidirectional conversion functions
| Function | Location | Direction |
|------|------|------|
| `resource_bioyond_to_plr(materials, type_mapping, deck)` | `graphio.py` | Bioyond → PLR |
| `resource_plr_to_bioyond(resources, type_mapping, warehouse_mapping)` | `graphio.py` | PLR → Bioyond |
### `resource_bioyond_to_plr` flow
```
Bioyond material list
  ↓ reverse_type_mapping: {typeName → (model, UUID)}
  ↓ for each material:
      typeName → look up mapping → model (e.g. "BIOYOND_PolymerStation_Reactor")
      initialize_resource({"name": unique_name, "class": model})
  ↓ set unilabos_extra (material_bioyond_id, material_bioyond_name, etc.)
  ↓ process detail (child materials/coordinates)
  ↓ place into the matching deck.warehouses slot by locationName
PLR resource list
```
### `resource_plr_to_bioyond` flow
```
PLR resource list
  ↓ for each resource:
      carrier (capacity > 1): generate details child materials + coordinates
      single bottle: map directly
  ↓ look up typeId via type_mapping
  ↓ look up the location UUID via warehouse_mapping
  ↓ assemble the Bioyond format (name, typeName, typeId, quantity, Parameters, locations)
Bioyond material list
```
### BioyondResourceSynchronizer
Workstations sync materials automatically through a `ResourceSynchronizer`:
```python
class BioyondResourceSynchronizer(ResourceSynchronizer):
def sync_from_external(self) -> bool:
all_data = []
        all_data.extend(api_client.stock_material('{"typeMode": 0}'))  # consumables
        all_data.extend(api_client.stock_material('{"typeMode": 1}'))  # samples
        all_data.extend(api_client.stock_material('{"typeMode": 2}'))  # reagents
        unilab_resources = resource_bioyond_to_plr(
            all_data,
            type_mapping=self.workstation.bioyond_config["material_type_mappings"],
            deck=self.workstation.deck
        )
        # update the resources on the deck
```
---
## 4. Non-bottle resources
### ElectrodeSheet (electrode sheet)
Path: `unilabos/resources/battery/electrode_sheet.py`
```python
class ElectrodeSheet(ResourcePLR):
    """Sheet materials (electrode sheets, separators, spring washers, spacers, etc.)"""
    _unilabos_state = {
        "diameter": 0.0,
        "thickness": 0.0,
        "mass": 0.0,
        "material_type": "",
        "color": "",
        "info": "",
    }
```
Factory functions: `PositiveCan`, `PositiveElectrode`, `NegativeCan`, `NegativeElectrode`, `SpringWasher`, `FlatWasher`, `AluminumFoil`
### Battery
```python
class Battery(Container):
    """An assembled battery"""
    _unilabos_state = {
        "color": "",
        "electrolyte_name": "",
        "open_circuit_voltage": 0.0,
    }
```
### Magazine / MagazineHolder (magazine)
```python
class Magazine(ResourceStack):
    """A magazine slot; ElectrodeSheets can be stacked in it"""
    # direction, max_sheets
class MagazineHolder(ItemizedResource):
    """A multi-slot magazine"""
    # hole_diameter, hole_depth, max_sheets_per_hole
```
The factory function `magazine_factory()` uses `create_homogeneous_resources` to generate the slots, optionally pre-filled with `ElectrodeSheet` or `Battery` instances.
---
## 5. Warehouse factory pattern reference
### Real warehouse factory examples
```python
# Row-major 4x4 warehouse
def bioyond_warehouse_1x4x4(name: str) -> WareHouse:
    return warehouse_factory(
        name=name,
        num_items_x=4, num_items_y=4, num_items_z=1,
        dx=10.0, dy=10.0, dz=10.0,
        item_dx=147.0, item_dy=106.0, item_dz=130.0,
        layout="row-major",  # A01,A02,A03,A04, B01,...
    )
# Right-hand 4x4 warehouse (column label offset)
def bioyond_warehouse_1x4x4_right(name: str) -> WareHouse:
    return warehouse_factory(
        name=name,
        num_items_x=4, num_items_y=4, num_items_z=1,
        dx=10.0, dy=10.0, dz=10.0,
        item_dx=147.0, item_dy=106.0, item_dz=130.0,
        col_offset=4,  # A05,A06,A07,A08
        layout="row-major",
    )
# Vertical warehouse (in-station reagent storage)
def bioyond_warehouse_reagent_storage(name: str) -> WareHouse:
    return warehouse_factory(
        name=name,
        num_items_x=1, num_items_y=2, num_items_z=1,
        dx=10.0, dy=10.0, dz=10.0,
        item_dx=147.0, item_dy=106.0, item_dz=130.0,
        layout="vertical-col-major",
    )
# Row offset (rows start at F)
def bioyond_warehouse_5x3x1(name: str, row_offset: int = 0) -> WareHouse:
    return warehouse_factory(
        name=name,
        num_items_x=3, num_items_y=5, num_items_z=1,
        dx=10.0, dy=10.0, dz=10.0,
        item_dx=159.0, item_dy=183.0, item_dz=130.0,
        row_offset=row_offset,  # 0 → rows start at A; 5 → rows start at F
        layout="row-major",
    )
```
### layout types
| layout | Label order | Typical use |
|--------|---------|---------|
| `col-major` (default) | A01,B01,C01,D01, A02,B02,... | column-first, standard stacks |
| `row-major` | A01,A02,A03,A04, B01,B02,... | row-first, Bioyond frontend display |
| `vertical-col-major` | vertical arrangement, labels start from the bottom | vertical warehouses (reagent storage, density measurement) |
---
## 6. Key paths
| Item | Path |
|------|------|
| Bottle/Carrier base classes | `unilabos/resources/itemized_carrier.py` |
| WareHouse class + factory | `unilabos/resources/warehouse.py` |
| ResourceTreeSet conversion | `unilabos/resources/resource_tracker.py` |
| Bioyond material conversion | `unilabos/resources/graphio.py` |
| Bioyond warehouse definitions | `unilabos/resources/bioyond/warehouses.py` |
| Battery resources | `unilabos/resources/battery/` |
| PLR registration | `unilabos/resources/plr_additional_res_reg.py` |


@@ -1,626 +0,0 @@
---
name: add-workstation
description: Guide for adding new workstations to Uni-Lab-OS (接入新工作站). Uses @device decorator + AST auto-scanning. Walks through workstation type, sub-device composition, driver creation, deck setup, and graph file. Use when the user wants to add a workstation, create a workstation driver, configure a station with sub-devices, or mentions 工作站/工站/station/workstation.
---
# Uni-Lab-OS Workstation Integration Guide
A workstation is a large device composed of multiple sub-devices, with its own material management system and workflow engine. Register it with the `@device` decorator; AST auto-scanning generates the registry entry.
---
## Workstation types
| Type | Base class | Use case |
| ------------------- | ----------------- | ---------------------------------- |
| **Protocol workstation** | `ProtocolNode` | standardized chemistry protocols (pump transfer, filtration, ...) |
| **External-system workstation** | `WorkstationBase` | integration with an external LIMS/MES |
| **Hardware-control workstation** | `WorkstationBase` | direct PLC/hardware control |
---
## The @device decorator (workstations)
Workstations also register with the `@device` decorator, with the same parameters as ordinary devices:
```python
@device(
    id="my_workstation",       # unique registry identifier (required)
    category=["workstation"],  # category tags
    description="My workstation",
)
```
If one workstation class supports multiple concrete variants, use `ids` / `id_meta` exactly as for devices (see the add-device SKILL).
---
## Workstation driver templates
### Template A: workstation backed by an external system
```python
import logging
from typing import Dict, Any, Optional
from pylabrobot.resources import Deck
from unilabos.registry.decorators import device, topic_config, not_action
from unilabos.devices.workstation.workstation_base import WorkstationBase
try:
    from unilabos.ros.nodes.presets.workstation import ROS2WorkstationNode
except ImportError:
    ROS2WorkstationNode = None

@device(id="my_workstation", category=["workstation"], description="My workstation")
class MyWorkstation(WorkstationBase):
    _ros_node: "ROS2WorkstationNode"

    def __init__(self, config=None, deck=None, protocol_type=None, **kwargs):
        super().__init__(deck=deck, **kwargs)
        self.config = config or {}
        self.logger = logging.getLogger("MyWorkstation")
        self.api_host = self.config.get("api_host", "")
        self._status = "Idle"

    @not_action
    def post_init(self, ros_node: "ROS2WorkstationNode"):
        super().post_init(ros_node)
        self._ros_node = ros_node

    async def scheduler_start(self, **kwargs) -> Dict[str, Any]:
        """Registered as a workstation action"""
        return {"success": True}

    async def create_order(self, json_str: str, **kwargs) -> Dict[str, Any]:
        """Registered as a workstation action"""
        return {"success": True}

    @property
    @topic_config()
    def workflow_sequence(self) -> str:
        return "[]"

    @property
    @topic_config()
    def material_info(self) -> str:
        return "{}"
```
### Template B: Protocol workstation
Use `ProtocolNode` directly; a custom driver class is usually unnecessary:
```python
from unilabos.devices.workstation.workstation_base import ProtocolNode
```
Just configure `protocol_type` in the graph file.
---
## Sub-device access (sub_devices)
After the workstation initializes its sub-devices, all sub-device instances are stored in the `self._ros_node.sub_devices` dict (key: device id, value: `ROS2DeviceNode` instance). The workstation's driver class can fetch a sub-device instance directly and call its methods:
```python
# Access a sub-device inside a workstation driver method
sub = self._ros_node.sub_devices["pump_1"]
# .driver_instance — the sub-device's driver instance (the device Python class instance)
sub.driver_instance.some_method(arg1, arg2)
# .ros_node_instance — the sub-device's ROS2 node instance
sub.ros_node_instance._action_value_mappings  # inspect the sub-device's supported actions
```
**Typical usage:**
```python
class MyWorkstation(WorkstationBase):
    def my_protocol(self, **kwargs):
        # fetch sub-device driver instances
        pump = self._ros_node.sub_devices["pump_1"].driver_instance
        heater = self._ros_node.sub_devices["heater_1"].driver_instance
        # call sub-device methods directly
        pump.aspirate(volume=100)
        heater.set_temperature(80)
```
> Reference implementation: `unilabos/devices/workstation/bioyond_studio/reaction_station/reaction_station.py` fetches a sub-reactor instance via `self._ros_node.sub_devices.get(reactor_id)` and updates its data.
---
## Hardware communication interface (hardware_interface)
Hardware-control workstations often drive multiple sub-devices over serial, Modbus, or similar protocols. Uni-Lab-OS shares ports via a **communication-device proxy** mechanism: one serial port gets exactly one `serial` node, and multiple sub-devices share that communication instance.
### How it works
When `ROS2WorkstationNode` initializes, it iterates over the sub-devices in two passes (`workstation.py`):
**Pass 1 — initialize all sub-devices**: call `initialize_device()` in `children` order. Communication devices (ids starting with `serial_` / `io_`) finish first and create the `serial.Serial()` instance. At this point the other sub-devices still hold a string, e.g. `self.hardware_interface = "serial_pump"`.
**Pass 2 — proxy replacement**: iterate over the initialized sub-devices and read each one's `_hardware_interface` config:
```
hardware_interface = d.ros_node_instance._hardware_interface
# → {"name": "hardware_interface", "read": "send_command", "write": "send_command"}
```
1. Read the attribute named by the `name` field: `name_value = getattr(driver, hardware_interface["name"])`
   - If `name_value` is a string and that string is some sub-device's id → trigger the proxy replacement
2. Fetch the real `read`/`write` methods from the communication device
3. Bind them onto the sub-device with `setattr(driver, read_method, _read)`
Therefore:
- **The communication device id must exactly match the string in the sub-device's config** (e.g. `"serial_pump"`)
- **The communication device id must start with `serial_` or `io_`** (otherwise pass 1 does not recognize it as a communication device)
- **The communication device must come first in the `children` list** so it initializes first
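A toy sketch of pass 2 (stand-in objects only; the real code lives in `workstation.py` and binds the read/write methods individually, while this simplification swaps in the whole communication instance):

```python
class FakeSerial:
    """Stand-in for an already-initialized communication device."""
    def send_command(self, cmd: str) -> str:
        return f"echo:{cmd}"

class Pump:
    _hardware_interface = {"name": "hardware_interface",
                           "read": "send_command", "write": "send_command"}
    def __init__(self, port: str):
        self.hardware_interface = port  # initially the comm device id string

def bind_hardware_interfaces(drivers, comm_devices):
    """Pass 2: replace string references with the real comm instance."""
    for driver in drivers:
        hw = getattr(driver, "_hardware_interface", None)
        if not hw:
            continue
        name_value = getattr(driver, hw["name"])
        # string matching a comm device id -> trigger the proxy replacement
        if isinstance(name_value, str) and name_value in comm_devices:
            setattr(driver, hw["name"], comm_devices[name_value])

pump = Pump("serial_pump")
bind_hardware_interfaces([pump], {"serial_pump": FakeSerial()})
print(pump.hardware_interface.send_command("status"))  # echo:status
```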
### HardwareInterface parameters
```python
from unilabos.registry.decorators import HardwareInterface
HardwareInterface(
    name="hardware_interface",  # attribute name in __init__ that receives the comm instance
    read="send_command",        # read method exposed by the communication device
    write="send_command",       # write method exposed by the communication device
    extra_info=["list_ports"],  # optional: extra methods to expose
)
```
**Meaning of the `name` field**: the **attribute name** in the device class's `__init__` that stores the communication instance; the system uses it to know which attribute to replace. Most devices use `"hardware_interface"`, but it can be customized (e.g. `"io_device_port"`).
### Example 1: name="hardware_interface"
```python
from unilabos.registry.decorators import device, HardwareInterface
@device(
    id="my_pump",
    category=["pump_and_valve"],
    hardware_interface=HardwareInterface(
        name="hardware_interface",
        read="send_command",
        write="send_command",
    ),
)
class MyPump:
    def __init__(self, port=None, address="1", **kwargs):
        # name="hardware_interface" → the system replaces self.hardware_interface
        self.hardware_interface = port  # initially the string "serial_pump"; replaced with a Serial instance at startup
        self.address = address

    def send_command(self, command: str):
        full_command = f"/{self.address}{command}\r\n"
        self.hardware_interface.write(bytearray(full_command, "ascii"))
        return self.hardware_interface.read_until(b"\n")
```
### Example 2: solenoid valve (name="io_device_port", custom attribute name)
```python
@device(
    id="solenoid_valve",
    category=["pump_and_valve"],
    hardware_interface=HardwareInterface(
        name="io_device_port",  # custom attribute name → the system replaces self.io_device_port
        read="read_io_coil",
        write="write_io_coil",
    ),
)
class SolenoidValve:
    def __init__(self, io_device_port: str = None, **kwargs):
        # name="io_device_port" → use "io_device_port": "io_board_1" in the graph config
        self.io_device_port = io_device_port  # initially a string; replaced with the Modbus instance
```
### The serial communication device (class="serial")
`serial` is Uni-Lab-OS's built-in communication proxy device; its code lives in `unilabos/ros/nodes/presets/serial_node.py`:
```python
from serial import Serial, SerialException
from threading import Lock
class ROS2SerialNode(BaseROS2DeviceNode):
def __init__(self, device_id, registry_name, port: str, baudrate: int = 9600, **kwargs):
self.port = port
self.baudrate = baudrate
self._hardware_interface = {
"name": "hardware_interface",
"write": "send_command",
"read": "read_data",
}
self._query_lock = Lock()
self.hardware_interface = Serial(baudrate=baudrate, port=port)
BaseROS2DeviceNode.__init__(
self, driver_instance=self, registry_name=registry_name,
device_id=device_id, status_types={}, action_value_mappings={},
hardware_interface=self._hardware_interface, print_publish=False,
)
self.create_service(SerialCommand, "serialwrite", self.handle_serial_request)
def send_command(self, command: str):
with self._query_lock:
self.hardware_interface.write(bytearray(f"{command}\n", "ascii"))
return self.hardware_interface.read_until(b"\n").decode()
def read_data(self):
with self._query_lock:
return self.hardware_interface.read_until(b"\n").decode()
```
Use `"class": "serial"` in the graph file to create a serial proxy:
```json
{
"id": "serial_pump",
"class": "serial",
"parent": "my_station",
"config": { "port": "COM7", "baudrate": 9600 }
}
```
### Graph file configuration
**The communication device must come first in the `children` list** so it initializes before the other sub-devices:
```json
{
"nodes": [
{
"id": "my_station",
"class": "workstation",
"children": ["serial_pump", "pump_1", "pump_2"],
"config": { "protocol_type": ["PumpTransferProtocol"] }
},
{
"id": "serial_pump",
"class": "serial",
"parent": "my_station",
"config": { "port": "COM7", "baudrate": 9600 }
},
{
"id": "pump_1",
"class": "syringe_pump_with_valve.runze.SY03B-T08",
"parent": "my_station",
"config": { "port": "serial_pump", "address": "1", "max_volume": 25.0 }
},
{
"id": "pump_2",
"class": "syringe_pump_with_valve.runze.SY03B-T08",
"parent": "my_station",
"config": { "port": "serial_pump", "address": "2", "max_volume": 25.0 }
}
],
"links": [
{
"source": "pump_1",
"target": "serial_pump",
"type": "communication",
"port": { "pump_1": "port", "serial_pump": "port" }
},
{
"source": "pump_2",
"target": "serial_pump",
"type": "communication",
"port": { "pump_2": "port", "serial_pump": "port" }
}
]
}
```
### Communication protocol quick reference
| Protocol | config parameters | Package | Communication device class |
| -------------------- | ------------------------------ | ---------- | -------------------------- |
| Serial (RS232/RS485) | `port`, `baudrate` | `pyserial` | `serial` |
| Modbus RTU | `port`, `baudrate`, `slave_id` | `pymodbus` | `device_comms/modbus_plc/` |
| Modbus TCP | `host`, `port`, `slave_id` | `pymodbus` | `device_comms/modbus_plc/` |
| TCP Socket | `host`, `port` | stdlib | custom |
| HTTP API | `url`, `token` | `requests` | `device_comms/rpc.py` |
Reference implementation: `unilabos/test/experiments/Grignard_flow_batchreact_single_pumpvalve.json`
---
## Deck and material lifecycle
### 1. The Deck parameter and its two initialization modes
Based on how `config.deck` is written on the device node, the system deserializes a Deck instance and passes it into `__init__` as the `deck` parameter. `deck` is currently a fixed field name and only one main Deck is supported. One device should own one deck, with second- and third-level child materials modeled on that deck.
There are two initialization modes:
#### init initialization (recommended)
`config.deck` contains `_resource_type` + `_resource_child_name` directly. The system first calls the Deck class's `__init__` with the Deck node's `config` to deserialize it, then passes the instance into the device's `deck` parameter. Child materials are deserialized together with the Deck's `children`.
```json
"config": {
  "deck": {
    "_resource_type": "unilabos.devices.liquid_handling.prcxi.prcxi:PRCXI9300Deck",
    "_resource_child_name": "PRCXI_Deck"
  }
}
```
#### deserialize initialization
`config.deck` wraps everything in a `data` key; the system takes the `deserialize` path, which accepts extra parameters (e.g. `allow_marshal`):
```json
"config": {
  "deck": {
    "data": {
      "_resource_child_name": "YB_Bioyond_Deck",
      "_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_YB_Deck"
    }
  }
}
```
Prefer init initialization unless you have special requirements.
#### config.deck fields
| Field | Description |
|------|------|
| `_resource_type` | full module path of the Deck class (`module:ClassName`) |
| `_resource_child_name` | the `id` of the Deck node in the graph file; establishes the parent-child link |
#### Receiving it in the device __init__
```python
def __init__(self, config=None, deck=None, protocol_type=None, **kwargs):
    super().__init__(deck=deck, **kwargs)
    # deck is already the deserialized Deck instance
    # → PRCXI9300Deck / BIOYOND_YB_Deck, etc.
```
#### The Deck node (in the graph file)
The Deck node is one of the device's `children`, with `parent` pointing at the device id:
```json
{
  "id": "PRCXI_Deck",
  "parent": "PRCXI",
  "type": "deck",
  "class": "",
  "children": [],
  "config": {
    "type": "PRCXI9300Deck",
    "size_x": 542, "size_y": 374, "size_z": 0,
    "category": "deck",
    "sites": [...]
  },
  "data": {}
}
```
- Fields in `config` are passed into the Deck class's `__init__` (so `__init__` must accept every field that `serialize()` outputs)
- When `children` starts empty, a synchronizer or manual initialization fills it
- `config.type` is the Deck class name
### 2. Initializing an empty Deck yourself
If the Deck node's `children` is empty, the workstation must populate it in `post_init` or on the first sync:
```python
@not_action
def post_init(self, ros_node):
    super().post_init(ros_node)
    if self.deck and not self.deck.children:
        self._initialize_default_deck()

def _initialize_default_deck(self):
    from my_labware import My_TipRack, My_Plate
    self.deck.assign_child_resource(My_TipRack("T1"), spot=0)
    self.deck.assign_child_resource(My_Plate("T2"), spot=1)
```
### 3. Bidirectional material sync
When a workstation integrates with an external system (LIMS/MES), implement a `ResourceSynchronizer` to handle bidirectional material sync:
```python
from unilabos.devices.workstation.workstation_base import ResourceSynchronizer

class MyResourceSynchronizer(ResourceSynchronizer):
    def sync_from_external(self) -> bool:
        """Sync from the external system into self.workstation.deck"""
        external_data = self._query_external_materials()
        # External side wins: recreate PLR resource instances from the external data
        for item in external_data:
            cls = self._resolve_resource_class(item["type"])
            resource = cls(name=item["name"], **item["params"])
            self.workstation.deck.assign_child_resource(resource, spot=item["slot"])
        return True

    def sync_to_external(self, resource) -> bool:
        """Push UniLab-side material changes to the external system"""
        # UniLab side wins: convert the PLR resource to the external format and push it
        external_format = self._convert_to_external(resource)
        return self._push_to_external(external_format)

    def handle_external_change(self, change_info) -> bool:
        """Handle changes pushed proactively by the external system"""
        return True
```
The sync strategy depends on the business scenario:
- **External side is authoritative**: query material data from the external API and recreate the matching PLR resource instances on the Deck
- **UniLab side is authoritative**: push UniLab-side material changes to the external system via `sync_to_external`
Initialize the synchronizer in the workstation's `post_init`:
```python
@not_action
def post_init(self, ros_node):
    super().post_init(ros_node)
    self.resource_synchronizer = MyResourceSynchronizer(self)
    self.resource_synchronizer.sync_from_external()
```
### 4. Serialization and persistence (serialize / serialize_state)
Resource classes must implement serialization correctly; the system relies on it for persistence and frontend sync.
**`serialize()`** — outputs the resource's structural information (the `config` layer), which is passed back into `__init__` on deserialization. Therefore **`__init__` must accept every field that `serialize()` outputs**, even if it does not currently use them:
```python
class MyDeck(Deck):
    def __init__(self, name, size_x, size_y, size_z,
                 sites=None,     # field output by serialize()
                 rotation=None,  # field output by serialize()
                 barcode=None,   # field output by serialize()
                 **kwargs):      # catch-all for any other serialize fields
        super().__init__(size_x, size_y, size_z, name)
        # ...

    def serialize(self) -> dict:
        data = super().serialize()
        data["sites"] = [...]  # custom field
        return data
```
**`serialize_state()`** — outputs the resource's runtime state (the `data` layer), used to persist mutable information. The contents of `data` are saved and restored correctly:
```python
class MyPlate(Plate):
    def __init__(self, name, size_x, size_y, size_z,
                 material_info=None, **kwargs):
        super().__init__(name, size_x, size_y, size_z, **kwargs)
        self._unilabos_state = {}
        if material_info:
            self._unilabos_state["Material"] = material_info

    def serialize_state(self) -> Dict[str, Any]:
        data = super().serialize_state()
        data.update(self._unilabos_state)
        return data
```
Key points:
- Every field output by `serialize()` is passed back into `__init__` as `config`, so `__init__` must accept them (explicit parameters or `**kwargs`)
- The `data` output by `serialize_state()` persists runtime state (material info, liquid volume, etc.)
- Store only JSON-serializable primitives in `_unilabos_state` (str, int, float, bool, list, dict, None)
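A toy round trip (plain Python, no PyLabRobot; the `PlateLike` class and its fields are illustrative only) showing why `**kwargs` matters — `serialize()` output is fed straight back into `__init__`:

```python
class PlateLike:
    """Toy resource demonstrating the serialize() -> __init__ round trip."""
    def __init__(self, name, size_x, material_info=None, **kwargs):
        # **kwargs absorbs serialize() fields this class does not use
        self.name = name
        self.size_x = size_x
        self._unilabos_state = {"Material": material_info} if material_info else {}

    def serialize(self) -> dict:
        # structural info (config layer); includes a field __init__ ignores
        return {"name": self.name, "size_x": self.size_x, "category": "plate"}

    def serialize_state(self) -> dict:
        # runtime state (data layer): JSON-serializable primitives only
        return dict(self._unilabos_state)

p = PlateLike("plate1", 127.0, material_info={"liquid": "water"})
clone = PlateLike(**p.serialize())  # works only because **kwargs accepts "category"
print(clone.name, p.serialize_state())  # plate1 {'Material': {'liquid': 'water'}}
```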
### 5. Automatic child material sync
Once child materials (Bottle, Plate, TipRack, etc.) are placed on the Deck, the system automatically syncs them to the frontend Deck view. Just make sure the resource classes implement `serialize()` / `serialize_state()` and deserialization correctly.
### 6. Graph file configuration (see prcxi_9320_slim.json)
```json
{
"nodes": [
{
"id": "my_station",
"type": "device",
"class": "my_workstation",
"config": {
"deck": {
"_resource_type": "unilabos.resources.my_module:MyDeck",
"_resource_child_name": "my_deck"
},
"host": "10.20.30.1",
"port": 9999
}
},
{
"id": "my_deck",
"parent": "my_station",
"type": "deck",
"class": "",
"children": [],
"config": {
"type": "MyLabDeck",
"size_x": 542,
"size_y": 374,
"size_z": 0,
"category": "deck",
"sites": [
{
"label": "T1",
"visible": true,
"occupied_by": null,
"position": { "x": 0, "y": 0, "z": 0 },
"size": { "width": 128.0, "height": 86, "depth": 0 },
"content_type": ["plate", "tip_rack", "tube_rack", "adaptor"]
}
]
},
"data": {}
}
],
"edges": []
}
```
Deck node checklist:
- `config.type` is the Deck class name (e.g. `"PRCXI9300Deck"`)
- `config.sites` lists every site in full (take it from the Deck class's `serialize()` output)
- `children` starts empty (filled by a synchronizer or manual initialization)
- The device node's `config.deck._resource_type` points at the Deck class's full module path
---
## Sub-devices
Create sub-devices through the standard device integration flow (see the add-device SKILL) using the `@device` decorator.
Sub-device constraints:
- In the graph file, `parent` points at the workstation id
- Listed in the workstation's `children` array
---
## Key rules
1. **`__init__` must accept `deck` and `**kwargs`** — `WorkstationBase.__init__` requires the `deck` parameter
2. **The Deck is deserialized and injected via `config.deck._resource_type`** — do not create the Deck manually in `__init__`
3. **Initialize an empty Deck yourself** — check and fill default materials in `post_init`
4. **Implement `ResourceSynchronizer` for external sync** — `sync_from_external` / `sync_to_external`
5. **Access sub-devices through `self._children`** — do not keep your own sub-device references
6. **Start background services in `post_init`** — do not open network connections in `__init__`
7. **Use `await self._ros_node.sleep()` in async methods** — `time.sleep()` and `asyncio.sleep()` are forbidden
8. **Mark non-action methods with `@not_action`** — `post_init`, `initialize`, `cleanup`
9. **Make sure child materials serialize/deserialize correctly** — the system auto-syncs them to the frontend Deck view
---
## Verification
```bash
# The module can be imported
python -c "from unilabos.devices.workstation.<name>.<name> import <ClassName>"
# Startup test (AST auto-scan)
unilab -g <graph>.json
```
---
## Existing workstation references
| Workstation | Driver class | Type |
| -------------- | ----------------------------- | -------- |
| Generic Protocol | `ProtocolNode` | Protocol |
| Bioyond reaction station | `BioyondReactionStation` | External system |
| Coin-cell assembly | `CoinCellAssemblyWorkstation` | Hardware control |
Reference path: the workstation implementations under `unilabos/devices/workstation/`.


@@ -1,371 +0,0 @@
# Workstation Advanced Patterns Reference
This file supplements SKILL.md with advanced patterns: external-system integration, material synchronization, and config structure.
The agent should read it on demand when implementing these features.
---
## 1. External-system integration patterns
### 1.1 RPC client
The standard pattern for talking to an external LIMS/MES system. Inherit from `BaseRequest`; every endpoint uses POST.
```python
from unilabos.device_comms.rpc import BaseRequest
class MySystemRPC(BaseRequest):
    """RPC client for an external system"""
def __init__(self, host: str, api_key: str):
super().__init__(host)
self.api_key = api_key
def _request(self, endpoint: str, data: dict = None) -> dict:
return self.post(
url=f"{self.host}/api/{endpoint}",
params={
"apiKey": self.api_key,
"requestTime": self.get_current_time_iso8601(),
"data": data or {},
},
)
def query_status(self) -> dict:
return self._request("status/query")
def create_order(self, order_data: dict) -> dict:
return self._request("order/create", order_data)
```
Reference: `BioyondV1RPC` in `unilabos/devices/workstation/bioyond_studio/bioyond_rpc.py`
### 1.2 HTTP callback service
The standard pattern for receiving reports pushed by an external system. Use `WorkstationHTTPService` and start it in `post_init`.
```python
from unilabos.devices.workstation.workstation_http_service import WorkstationHTTPService
class MyWorkstation(WorkstationBase):
def __init__(self, config=None, deck=None, **kwargs):
super().__init__(deck=deck, **kwargs)
self.config = config or {}
http_cfg = self.config.get("http_service_config", {})
self._http_service_config = {
"host": http_cfg.get("http_service_host", "127.0.0.1"),
"port": http_cfg.get("http_service_port", 8080),
}
self.http_service = None
def post_init(self, ros_node):
super().post_init(ros_node)
self.http_service = WorkstationHTTPService(
workstation_instance=self,
host=self._http_service_config["host"],
port=self._http_service_config["port"],
)
self.http_service.start()
```
**HTTP service routes** (fixed endpoints, dispatched automatically by `WorkstationHTTPHandler`):
| Endpoint | Workstation method invoked |
|------|-----------------|
| `/report/step_finish` | `process_step_finish_report(report_request)` |
| `/report/sample_finish` | `process_sample_finish_report(report_request)` |
| `/report/order_finish` | `process_order_finish_report(report_request, used_materials)` |
| `/report/material_change` | `process_material_change_report(report_data)` |
| `/report/error_handling` | `handle_external_error(error_data)` |
Implement the corresponding methods to receive the callbacks:
```python
def process_step_finish_report(self, report_request) -> Dict[str, Any]:
    """Handle a step-finish report"""
    step_name = report_request.data.get("stepName")
    return {"success": True, "message": f"Step {step_name} processed"}
def process_order_finish_report(self, report_request, used_materials) -> Dict[str, Any]:
    """Handle an order-finish report"""
    order_code = report_request.data.get("orderCode")
    return {"success": True}
```
Reference: `unilabos/devices/workstation/workstation_http_service.py`
### 1.3 Connection monitoring
A dedicated thread periodically checks the external system's connection state and publishes a ROS event when the state changes.
```python
class ConnectionMonitor:
def __init__(self, workstation, check_interval=30):
self.workstation = workstation
self.check_interval = check_interval
self._running = False
self._thread = None
def start(self):
self._running = True
self._thread = threading.Thread(target=self._monitor_loop, daemon=True)
self._thread.start()
def _monitor_loop(self):
while self._running:
            try:
                # Probe the external system to check connectivity
                self.workstation.hardware_interface.ping()
                status = "online"
            except Exception:
                status = "offline"
            # ...publish a ROS event here when `status` changes (omitted)...
time.sleep(self.check_interval)
```
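A runnable, self-contained variant of the same pattern (the names here are illustrative, not the Uni-Lab-OS API): it adds a `stop()` method and reports only status *transitions* through a callback, in place of the ROS event:

```python
import threading
import time

class SimpleConnectionMonitor:
    """Polls a ping() callable on a background thread and reports status transitions."""
    def __init__(self, ping, on_change, check_interval=0.5):
        self.ping = ping                  # callable that raises on connection failure
        self.on_change = on_change        # called as on_change("online") / on_change("offline")
        self.check_interval = check_interval
        self._status = None
        self._running = False
        self._thread = None

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        if self._thread:
            self._thread.join()

    def _loop(self):
        while self._running:
            try:
                self.ping()
                status = "online"
            except Exception:
                status = "offline"
            if status != self._status:    # only report transitions, not every poll
                self._status = status
                self.on_change(status)
            time.sleep(self.check_interval)
```

Keeping the transition check inside the loop avoids flooding the event channel with identical statuses on every poll.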
Reference: `ConnectionMonitor` in `unilabos/devices/workstation/bioyond_studio/station.py`
---
## 2. Config structure patterns
The workstation's `config` is defined in the graph file and passed to `__init__`. Common field patterns:
### 2.1 External-system connection
```json
{
"api_host": "http://192.168.1.100:8080",
"api_key": "YOUR_API_KEY"
}
```
### 2.2 HTTP callback service
```json
{
"http_service_config": {
"http_service_host": "127.0.0.1",
"http_service_port": 8080
}
}
```
### 2.3 Material type mappings
Maps PLR resource class names to the external system's material types (display name + UUID). Used for material conversion in both directions.
```json
{
"material_type_mappings": {
    "PLR_ResourceClassName": ["external display name", "external-type-uuid"],
    "BIOYOND_PolymerStation_Reactor": ["Reactor", "3a14233b-902d-0d7b-..."]
}
}
```
### 2.4 Warehouse mapping
Maps warehouse names to the external system's warehouse UUID and site UUIDs. Used for stock-in/stock-out operations.
```json
{
"warehouse_mapping": {
    "warehouse name": {
"uuid": "warehouse-uuid",
"site_uuids": {
"A01": "site-uuid-A01",
"A02": "site-uuid-A02"
}
}
}
}
```
### 2.5 Workflow mappings
Maps internal workflow names to the external system's workflow IDs.
```json
{
"workflow_mappings": {
"internal_workflow_name": "external-workflow-uuid"
}
}
```
### 2.6 Material default parameters
```json
{
"material_default_parameters": {
"NMP": {
"unit": "毫升",
"density": "1.03",
"densityUnit": "g/mL",
      "description": "N-Methylpyrrolidone"
}
}
}
```
---
## 3. Resource synchronization
### 3.1 ResourceSynchronizer
The abstract base class for two-way sync with an external material system. Defined in `workstation_base.py`:
```python
from unilabos.devices.workstation.workstation_base import ResourceSynchronizer
class MyResourceSynchronizer(ResourceSynchronizer):
def __init__(self, workstation, api_client):
super().__init__(workstation)
self.api_client = api_client
    def sync_from_external(self) -> bool:
        """Pull materials from the external system onto the deck"""
        external_materials = self.api_client.list_materials()
        for material in external_materials:
            plr_resource = self._convert_to_plr(material)
            # Map the external location to a deck coordinate (_coordinate_for is a
            # hypothetical helper; the original snippet left `coordinate` undefined)
            coordinate = self._coordinate_for(material)
            self.workstation.deck.assign_child_resource(plr_resource, coordinate)
        return True
    def sync_to_external(self, plr_resource) -> bool:
        """Push deck material changes to the external system"""
        external_data = self._convert_from_plr(plr_resource)
        self.api_client.update_material(external_data)
        return True
    def handle_external_change(self, change_info) -> bool:
        """Handle a material change pushed by the external system"""
        return True
```
### 3.2 update_resource — uploading the resource tree to the cloud
Serializes the PLR Deck and uploads it through a ROS service. Typical usage:
```python
# Upload the initial deck in post_init
from unilabos.ros.nodes.base_device_node import ROS2DeviceNode
ROS2DeviceNode.run_async_func(
self._ros_node.update_resource, True,
**{"resources": [self.deck]}
)
# Update a specific resource inside an action method
ROS2DeviceNode.run_async_func(
self._ros_node.update_resource, True,
**{"resources": [updated_plate]}
)
```
---
## 4. Workflow sequence management
The workstation manages its task queue through the `workflow_sequence` property (as a JSON string).
```python
import json
import time
from typing import Any, Dict

class MyWorkstation(WorkstationBase):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._workflow_sequence = []
    @property
    def workflow_sequence(self) -> str:
        """Returns a JSON string; published automatically via ROS"""
        return json.dumps(self._workflow_sequence)
    async def append_to_workflow_sequence(self, workflow_name: str) -> Dict[str, Any]:
        """Append a workflow to the queue"""
        self._workflow_sequence.append({
            "name": workflow_name,
            "status": "pending",
            "created_at": time.time(),
        })
        return {"success": True}
    async def clear_workflows(self) -> Dict[str, Any]:
        """Clear the workflow queue"""
        self._workflow_sequence = []
        return {"success": True}
```
---
## 5. Inter-station material transfer
The pattern for transferring materials between workstations: invoke the target station's action through a ROS ActionClient.
```python
async def transfer_materials_to_another_station(
self,
target_device_id: str,
transfer_groups: list,
**kwargs,
) -> Dict[str, Any]:
    """Transfer materials to another workstation"""
    target_node = self._children.get(target_device_id)
    if not target_node:
        # Look up a target station that is not a child device via the ROS node
        pass
    for group in transfer_groups:
        resource = self.find_resource_by_name(group["resource_name"])
        # Remove from this station's deck
        resource.unassign()
        # Invoke the target station's receive method
        # ...
return {"success": True, "transferred": len(transfer_groups)}
```
Reference: `BioyondDispensingStation.transfer_materials_to_reaction_station`
---
## 6. The complete post_init pattern
`post_init` is the key initialization stage for a workstation; by the time it runs, the ROS node and child devices are ready.
```python
def post_init(self, ros_node):
super().post_init(ros_node)
    # 1. Initialize the external-system client (config is available by now)
    self.rpc_client = MySystemRPC(
        host=self.config.get("api_host"),
        api_key=self.config.get("api_key"),
    )
    self.hardware_interface = self.rpc_client
    # 2. Start connection monitoring
    self.connection_monitor = ConnectionMonitor(self)
    self.connection_monitor.start()
    # 3. Start the HTTP callback service
    if hasattr(self, '_http_service_config'):
        self.http_service = WorkstationHTTPService(
            workstation_instance=self,
            host=self._http_service_config["host"],
            port=self._http_service_config["port"],
        )
        self.http_service.start()
    # 4. Upload the deck to the cloud
    ROS2DeviceNode.run_async_func(
        self._ros_node.update_resource, True,
        **{"resources": [self.deck]}
    )
    # 5. Initialize the resource synchronizer (optional)
self.resource_synchronizer = MyResourceSynchronizer(self, self.rpc_client)
```


@@ -1,233 +0,0 @@
---
name: batch-insert-reagent
description: Batch insert reagents into Uni-Lab platform — add chemicals with CAS, SMILES, supplier info. Use when the user wants to add reagents, insert chemicals, batch register reagents, or mentions 录入试剂/添加试剂/试剂入库/reagent.
---
# Batch Reagent Insertion Skill
Batch-insert reagent records through the cloud API; supports inserting one at a time or in bulk.
## Prerequisites (all required)
Before using this skill, the following **must** be confirmed. If any item is missing, **ask the user immediately and stop**; continue only once everything is in place.
### 1. ak / sk → AUTH
Ask for the user's launch parameters; take them from `--ak` `--sk` or from config.py.
Generate the AUTH token (either option works):
```bash
# Option 1: generate with a Python one-liner
python -c "import base64,sys; print('Authorization: Lab ' + base64.b64encode(f'{sys.argv[1]}:{sys.argv[2]}'.encode()).decode())" <ak> <sk>
# Option 2: compute manually
# base64(ak:sk) → Authorization: Lab <token>
```
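The same computation as a small, self-contained helper (the function name is illustrative, not part of the platform):

```python
import base64

def lab_auth_header(ak: str, sk: str) -> str:
    """Build the Authorization header value: 'Lab ' + base64(ak:sk)."""
    token = base64.b64encode(f"{ak}:{sk}".encode()).decode()
    return f"Lab {token}"
```

Any HTTP client can then send the result as the `Authorization` header.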
### 2. --addr → BASE URL
| `--addr` value | BASE |
|-------------|------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| (not passed, default) | `https://uni-lab.bohrium.com` |
Once confirmed, set:
```bash
BASE="<URL determined by addr>"
AUTH="Authorization: Lab <token generated above>"
```
**Only make API requests once both are ready.**
## Session State
- `lab_uuid` — lab UUID; fetched automatically via API #1 on first use, **no need to ask the user**
## Request conventions
All requests use `curl -s`; POST additionally needs `Content-Type: application/json`.
> **On Windows** you must use `curl.exe` (not the PowerShell `curl` alias); every `curl` in the examples means `curl.exe`.
---
## API Endpoints
### 1. Get lab info (fetches lab_uuid automatically)
```bash
curl -s -X GET "$BASE/api/v1/edge/lab/info" -H "$AUTH"
```
Returns:
```json
{"code": 0, "data": {"uuid": "xxx", "name": "lab name"}}
```
Remember `data.uuid` as `lab_uuid`.
### 2. Insert a reagent
```bash
curl -s -X POST "$BASE/api/v1/lab/reagent" \
  -H "$AUTH" -H "Content-Type: application/json" \
  -d '{
    "lab_uuid": "<lab_uuid>",
    "cas": "<CAS number>",
    "name": "<reagent name>",
    "molecular_formula": "<molecular formula>",
    "smiles": "<SMILES>",
    "stock_in_quantity": <stock-in quantity>,
    "unit": "<unit string>",
    "supplier": "<supplier>",
    "production_date": "<production date, ISO 8601>",
    "expiry_date": "<expiry date, ISO 8601>"
  }'
```
On success the response contains the reagent UUID:
```json
{"code": 0, "data": {"uuid": "xxx", ...}}
```
---
## Reagent field reference
| Field | Type | Required | Description | Example |
|------|------|------|------|------|
| `lab_uuid` | string | yes | Lab UUID (from API #1) | `"8511c672-..."` |
| `cas` | string | yes | CAS registry number | `"7732-18-5"` |
| `name` | string | yes | Reagent name (Chinese or English) | `"水"` |
| `molecular_formula` | string | yes | Molecular formula | `"H2O"` |
| `smiles` | string | yes | SMILES representation | `"O"` |
| `stock_in_quantity` | number | yes | Stock-in quantity | `10` |
| `unit` | string | yes | Unit (string; see table below) | `"mL"` |
| `supplier` | string | no | Supplier name | `"国药集团"` |
| `production_date` | string | no | Production date (ISO 8601) | `"2025-11-18T00:00:00Z"` |
| `expiry_date` | string | no | Expiry date (ISO 8601) | `"2026-11-18T00:00:00Z"` |
### unit values
| Value | Unit |
|------|------|
| `"mL"` | milliliter |
| `"L"` | liter |
| `"g"` | gram |
| `"kg"` | kilogram |
| `"瓶"` | bottle |
> Choose by physical state: `"mL"` / `"L"` for liquids, `"g"` / `"kg"` for solids.
---
## Batch insertion strategies
### Option 1: the user provides a JSON array
The user supplies several reagent records at once:
```json
[
  {"cas": "7732-18-5", "name": "水", "molecular_formula": "H2O", "smiles": "O", "stock_in_quantity": 10, "unit": "mL"},
  {"cas": "64-17-5", "name": "乙醇", "molecular_formula": "C2H6O", "smiles": "CCO", "stock_in_quantity": 5, "unit": "L"}
]
```
The agent fills in `lab_uuid`, `production_date`, `expiry_date`, etc. for each record, then submits them one by one.
The agent loops over API #2, one API call per record.
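The fill-and-check step can be sketched as a small helper (the function name is illustrative); it stamps each record with `lab_uuid` and flags records missing a required field before any API call is made:

```python
# Required fields per the reagent field reference above
REQUIRED = ["cas", "name", "molecular_formula", "smiles", "stock_in_quantity", "unit"]

def build_reagent_payloads(records, lab_uuid):
    """Fill in lab_uuid for each record; report records missing required fields."""
    payloads, errors = [], []
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED if f not in rec]
        if missing:
            errors.append((i, missing))   # (record index, missing field names)
            continue
        payloads.append({"lab_uuid": lab_uuid, **rec})
    return payloads, errors
```

Only records in `payloads` are submitted; entries in `errors` go back to the user for completion.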
### Option 2: the user describes reagents one by one
The user describes a reagent verbally (e.g. "insert 500 mL of anhydrous ethanol, from Sigma"); the agent completes the fields itself:
1. Look up the CAS number, molecular formula, and SMILES from the name (use the quick-reference table below, or infer)
2. Build the full request body
3. Confirm with the user, then submit
### Option 3: bulk import from a CSV/spreadsheet
The user provides a CSV or spreadsheet path; the agent reads and parses it:
```bash
# Expected CSV format (first row is the header)
cas,name,molecular_formula,smiles,stock_in_quantity,unit,supplier,production_date,expiry_date
7732-18-5,水,H2O,O,10,mL,农夫山泉,2025-11-18T00:00:00Z,2026-11-18T00:00:00Z
```
### Execution and reporting
After each API call:
1. Check the returned `code` (0 = success)
2. Track success/failure counts
3. When done, summarize: "Inserted N reagents: X succeeded, Y failed"
4. On failure, list the failed reagent names and error messages
---
## Common reagent quick reference
| Name | CAS | Formula | SMILES |
|------|-----|--------|--------|
| Water | 7732-18-5 | H2O | O |
| Ethanol | 64-17-5 | C2H6O | CCO |
| Methanol | 67-56-1 | CH4O | CO |
| Acetone | 67-64-1 | C3H6O | CC(C)=O |
| Dimethyl sulfoxide (DMSO) | 67-68-5 | C2H6OS | CS(C)=O |
| Ethyl acetate | 141-78-6 | C4H8O2 | CCOC(C)=O |
| Dichloromethane | 75-09-2 | CH2Cl2 | ClCCl |
| Tetrahydrofuran (THF) | 109-99-9 | C4H8O | C1CCOC1 |
| N,N-Dimethylformamide (DMF) | 68-12-2 | C3H7NO | CN(C)C=O |
| Chloroform | 67-66-3 | CHCl3 | ClC(Cl)Cl |
| Acetonitrile | 75-05-8 | C2H3N | CC#N |
| Toluene | 108-88-3 | C7H8 | Cc1ccccc1 |
| n-Hexane | 110-54-3 | C6H14 | CCCCCC |
| Isopropanol | 67-63-0 | C3H8O | CC(C)O |
| Hydrochloric acid | 7647-01-0 | HCl | Cl |
| Sulfuric acid | 7664-93-9 | H2SO4 | OS(O)(=O)=O |
| Sodium hydroxide | 1310-73-2 | NaOH | [Na]O |
| Sodium carbonate | 497-19-8 | Na2CO3 | [Na]OC([O-])=O.[Na+] |
| Sodium chloride | 7647-14-5 | NaCl | [Na]Cl |
| EDTA | 60-00-4 | C10H16N2O8 | OC(=O)CN(CCN(CC(O)=O)CC(O)=O)CC(O)=O |
> This table is for quick reference only. For reagents not listed, the agent should infer from chemical knowledge or ask the user.
---
## Full workflow checklist
```
Task Progress:
- [ ] Step 1: Confirm ak/sk → generate the AUTH token
- [ ] Step 2: Confirm --addr → set the BASE URL
- [ ] Step 3: GET /edge/lab/info → fetch lab_uuid
- [ ] Step 4: Collect reagent info (user-provided list / one-by-one / CSV file)
- [ ] Step 5: Fill in missing fields (CAS, formula, SMILES, etc.)
- [ ] Step 6: Confirm the reagent list with the user
- [ ] Step 7: Loop POST /lab/reagent, one record per call (each needs lab_uuid)
- [ ] Step 8: Summarize results (success/failure counts and details)
```
---
## Complete example
The user says: "Insert 3 reagents for me: 500 mL anhydrous ethanol, 1 kg sodium chloride, 2 L deionized water."
The request sequence the agent builds:
```json
// Record 1
{"lab_uuid": "8511c672-...", "cas": "64-17-5", "name": "无水乙醇", "molecular_formula": "C2H6O", "smiles": "CCO", "stock_in_quantity": 500, "unit": "mL", "supplier": "国药集团", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
// Record 2
{"lab_uuid": "8511c672-...", "cas": "7647-14-5", "name": "氯化钠", "molecular_formula": "NaCl", "smiles": "[Na]Cl", "stock_in_quantity": 1, "unit": "kg", "supplier": "", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
// Record 3
{"lab_uuid": "8511c672-...", "cas": "7732-18-5", "name": "去离子水", "molecular_formula": "H2O", "smiles": "O", "stock_in_quantity": 2, "unit": "L", "supplier": "", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
```


@@ -1,301 +0,0 @@
---
name: batch-submit-experiment
description: Batch submit experiments (notebooks) to Uni-Lab platform — list workflows, generate node_params from registry schemas, submit multiple rounds. Use when the user wants to submit experiments, create notebooks, batch run workflows, or mentions 提交实验/批量实验/notebook/实验轮次.
---
# Batch Experiment Submission Guide
Batch-submit experiments (notebooks) through the cloud API, with per-round parameter configuration. `node_params` templates are generated automatically from the workflow template detail and the local device registry.
## Prerequisites (all required)
Before using this guide, the following **must** be confirmed. If any item is missing, **ask the user immediately and stop**; continue only once everything is in place.
### 1. ak / sk → AUTH
Ask for the user's launch parameters; take them from `--ak` `--sk` or from config.py.
Generate the AUTH token (either option works):
```bash
# Option 1: generate with a Python one-liner
python -c "import base64,sys; print('Authorization: Lab ' + base64.b64encode(f'{sys.argv[1]}:{sys.argv[2]}'.encode()).decode())" <ak> <sk>
# Option 2: compute manually
# base64(ak:sk) → Authorization: Lab <token>
```
### 2. --addr → BASE URL
| `--addr` value | BASE |
|-------------|------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| (not passed, default) | `https://uni-lab.bohrium.com` |
Once confirmed, set:
```bash
BASE="<URL determined by addr>"
AUTH="Authorization: Lab <token printed by the command above>"
```
### 3. req_device_registry_upload.json (device registry)
**Batch experiment submission needs the local registry to resolve the parameter schemas of the workflow's nodes.**
Search in priority order:
```
<workspace root>/unilabos_data/req_device_registry_upload.json
<workspace root>/req_device_registry_upload.json
```
You can also Glob for it directly: `**/req_device_registry_upload.json`
Once found, **check the file's modification time** and tell the user. If it is more than a day old, ask whether `unilab` needs to be restarted.
**If the file does not exist** → tell the user to run the `unilab` launch command first and wait for the registry to be generated. This step may be skipped, but parameter templates then cannot be generated automatically and the user must fill in `param` by hand.
### 4. workflow_uuid (target workflow)
The user must provide the UUID of the workflow to submit. If unsure, list the available workflows via API #2 and let them choose.
**Start only once all four items are ready.**
## Session State
在整个对话过程中agent 需要记住以下状态,避免重复询问用户:
- `lab_uuid` — 实验室 UUID首次通过 API #1 自动获取,**不需要问用户**
- `workflow_uuid` — 工作流 UUID用户提供或从列表选择
- `workflow_nodes` — workflow 中各 action 节点的 uuid、设备 ID、动作名从 API #3 获取)
## 请求约定
所有请求使用 `curl -s`POST 需加 `Content-Type: application/json`
> **Windows 平台**必须使用 `curl.exe`(而非 PowerShell 的 `curl` 别名),示例中的 `curl` 均指 `curl.exe`。
>
> **PowerShell JSON 传参**PowerShell 中 `-d '{"key":"value"}'` 会因引号转义失败。请将 JSON 写入临时文件,用 `-d '@tmp_body.json'`(单引号包裹 `@`,否则会被解析为 splatting 运算符)。
---
## API Endpoints
### 1. Get lab info (fetches lab_uuid automatically)
```bash
curl -s -X GET "$BASE/api/v1/edge/lab/info" -H "$AUTH"
```
Returns:
```json
{"code": 0, "data": {"uuid": "xxx", "name": "lab name"}}
```
Remember `data.uuid` as `lab_uuid`.
### 2. List available workflows
```bash
curl -s -X GET "$BASE/api/v1/lab/workflow/workflows?page=1&page_size=20&lab_uuid=$lab_uuid" -H "$AUTH"
```
Returns the workflow list; show it to the user, listing each workflow's `uuid` and `name`.
### 3. Get the workflow template detail
```bash
curl -s -X GET "$BASE/api/v1/lab/workflow/template/detail/$workflow_uuid" -H "$AUTH"
```
Returns the workflow's full structure, including every action node. Extract from the response:
- each action node's `node_uuid`
- each node's device ID (`resource_template_name`)
- each node's action name (`node_template_name`)
- each node's existing parameters (`param`)
> **Note**: this API's response format may differ between versions. On the first call, print the full response and analyze its structure before extracting node info. Common node field paths are `data.nodes[]` and `data.workflow_nodes[]`.
### 4. Submit an experiment (create a notebook)
```bash
curl -s -X POST "$BASE/api/v1/lab/notebook" \
  -H "$AUTH" -H "Content-Type: application/json" \
  -d '<request_body>'
```
Request body structure:
```json
{
  "lab_uuid": "<lab_uuid>",
  "workflow_uuid": "<workflow_uuid>",
  "name": "<experiment name>",
  "node_params": [
    {
      "sample_uuids": ["<sample UUID 1>", "<sample UUID 2>"],
      "datas": [
        {
          "node_uuid": "<node UUID from the workflow>",
          "param": {},
          "sample_params": [
            {
              "container_uuid": "<container UUID>",
              "sample_value": {
                "liquid_names": "<liquid name>",
                "volumes": 1000
              }
            }
          ]
        }
      ]
    }
  ]
}
```
> **Note**: `sample_uuids` must be a **UUID array** (`[]uuid.UUID`), not a string. Pass an empty array `[]` when there are no samples.
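A small pre-flight check catches the most common mistakes named above (a string where `sample_uuids` should be an array, a missing `node_uuid`) before the POST is sent. The function name is illustrative:

```python
def check_notebook_body(body: dict) -> list:
    """Return a list of problems with a notebook request body (empty = looks OK)."""
    problems = []
    for key in ("lab_uuid", "workflow_uuid", "name", "node_params"):
        if key not in body:
            problems.append(f"missing top-level field: {key}")
    for i, rnd in enumerate(body.get("node_params", [])):
        # sample_uuids must be an array, even when empty
        if not isinstance(rnd.get("sample_uuids"), list):
            problems.append(f"round {i}: sample_uuids must be an array (use [] when empty)")
        for j, data in enumerate(rnd.get("datas", [])):
            if not data.get("node_uuid"):
                problems.append(f"round {i}, datas[{j}]: node_uuid is missing")
    return problems
```

Run it on the assembled body; submit only when the returned list is empty.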
---
## The notebook request body in detail
### node_params structure
`node_params` is an array; **each element represents one experiment round**:
- 2 rounds → `node_params` has 2 elements
- N rounds → `node_params` has N elements
### Fields per round
| Field | Type | Description |
|------|------|------|
| `sample_uuids` | array\<uuid\> | Sample UUIDs for this round; pass `[]` when there are none |
| `datas` | array | Parameter configuration for each workflow node in this round |
### Each node in datas
| Field | Type | Description |
|------|------|------|
| `node_uuid` | string | Node UUID from the workflow template (from API #3) |
| `param` | object | Action parameters (filled according to the local registry schema) |
| `sample_params` | array | Sample-related parameters (liquid names, volumes, etc.) |
### Each entry in sample_params
| Field | Type | Description |
|------|------|------|
| `container_uuid` | string | Container UUID |
| `sample_value` | object | Sample values, e.g. `{"liquid_names": "水", "volumes": 1000}` |
---
## Generating param templates from the local registry
### Automatic — run the script
```bash
python scripts/gen_notebook_params.py \
  --auth <token> \
  --base <BASE_URL> \
  --workflow-uuid <workflow_uuid> \
  [--registry <path/to/req_device_registry_upload.json>] \
  [--rounds <number of rounds>] \
  [--output <output file path>]
```
> The script lives at `scripts/gen_notebook_params.py`, next to this document.
The script:
1. Calls the workflow detail API to get all action nodes
2. Reads the local registry and looks up each node's action schema
3. Generates `notebook_template.json`, containing:
   - the full `node_params` skeleton
   - each node's param fields with type annotations
   - `_schema_info` helper info (not submitted; reference only)
### Manual
If the script is unavailable or the registry does not exist:
1. Call API #3 to get the workflow detail
2. Find each action node's `node_uuid`
3. Look up the device's `action_value_mappings` in the local registry:
```
resources[].id == <device_id>
  → resources[].class.action_value_mappings.<action_name>.schema.properties.goal.properties
```
4. Use the schema's properties as the field template for `param`
5. Duplicate the `node_params` element per round and let the user fill in each round's values
### Registry structure reference
```json
{
"resources": [
{
"id": "liquid_handler.prcxi",
"class": {
"module": "unilabos.devices.xxx:ClassName",
"action_value_mappings": {
"transfer_liquid": {
"type": "LiquidHandlerTransfer",
"schema": {
"properties": {
"goal": {
"properties": {
"asp_vols": {"type": "array", "items": {"type": "number"}},
"sources": {"type": "array"}
},
"required": ["asp_vols", "sources"]
}
}
},
"goal_default": {}
}
}
}
}
]
}
```
When filling in `param`, use the field names and types from `goal.properties`.
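The lookup path above can be sketched as a helper (function name is illustrative) that walks the registry down to the goal-level properties:

```python
def goal_properties(registry: dict, device_id: str, action: str) -> dict:
    """Walk the registry to an action's goal-level properties (the param template fields)."""
    for res in registry.get("resources", []):
        if res.get("id") == device_id:
            avm = res.get("class", {}).get("action_value_mappings", {})
            schema = avm.get(action, {}).get("schema", {})
            # resources[].class.action_value_mappings.<action>.schema.properties.goal.properties
            return schema.get("properties", {}).get("goal", {}).get("properties", {})
    return {}
```

Applied to the sample registry above, it returns the `asp_vols` / `sources` field definitions for `transfer_liquid`.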
---
## Full workflow checklist
```
Task Progress:
- [ ] Step 1: Confirm ak/sk → generate the AUTH token
- [ ] Step 2: Confirm --addr → set the BASE URL
- [ ] Step 3: GET /edge/lab/info → fetch lab_uuid
- [ ] Step 4: Confirm workflow_uuid (user-provided or chosen from the GET #2 list)
- [ ] Step 5: GET workflow detail (#3) → extract each node's uuid, device ID, action name
- [ ] Step 6: Locate the local registry req_device_registry_upload.json
- [ ] Step 7: Run gen_notebook_params.py or match manually → generate the node_params template
- [ ] Step 8: Guide the user to fill in each round's parameters (sample_uuids, param, sample_params)
- [ ] Step 9: Build the full request body → POST /lab/notebook
- [ ] Step 10: Check the response and confirm the submission succeeded
```
---
## FAQ
### Q: The workflow has several nodes — do I fill in every node's parameters for every round?
Yes. The `datas` array must contain the parameters of every workflow node involved in that round. Usually each action node needs one `datas` entry.
### Q: Are the parameters completely different between rounds?
Usually each round's `param` (device action parameters) is the same or similar, while `sample_uuids` and `sample_params` (sample info) differ per round. The script copies the skeleton per round; the user only edits the differences.
### Q: How do I get sample_uuids and container_uuid?
These UUIDs usually come from the lab's sample management system. Ask the user, or look them up in the resource tree (API `GET /lab/material/download/$lab_uuid`).


@@ -1,394 +0,0 @@
#!/usr/bin/env python3
"""
Generate a node_params template for notebook submission from a workflow
template detail plus the local device registry.
Usage:
    python gen_notebook_params.py --auth <token> --base <url> --workflow-uuid <uuid> [options]
Options:
    --auth <token>          Lab token (the result of base64(ak:sk), without the "Lab " prefix)
    --base <url>            API base URL, e.g. https://uni-lab.test.bohrium.com
    --workflow-uuid <uuid>  Target workflow UUID
    --registry <path>       Local registry file path (auto-searched by default)
    --rounds <n>            Number of experiment rounds (default 1)
    --output <path>         Output template file path (default notebook_template.json)
    --dump-response         Print the raw workflow detail API response (for debugging)
Example:
    python gen_notebook_params.py \\
        --auth YTFmZDlkNGUtxxxx \\
        --base https://uni-lab.test.bohrium.com \\
        --workflow-uuid abc-123-def \\
        --rounds 2
"""
import copy
import json
import os
import sys
from datetime import datetime
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
REGISTRY_FILENAME = "req_device_registry_upload.json"
def find_registry(explicit_path=None):
    """Locate the local registry file; same search logic as extract_device_actions.py"""
if explicit_path:
if os.path.isfile(explicit_path):
return explicit_path
if os.path.isdir(explicit_path):
fp = os.path.join(explicit_path, REGISTRY_FILENAME)
if os.path.isfile(fp):
return fp
        print(f"Warning: the given registry path does not exist: {explicit_path}")
return None
candidates = [
os.path.join("unilabos_data", REGISTRY_FILENAME),
REGISTRY_FILENAME,
]
for c in candidates:
if os.path.isfile(c):
return c
script_dir = os.path.dirname(os.path.abspath(__file__))
workspace_root = os.path.normpath(os.path.join(script_dir, "..", "..", ".."))
for c in candidates:
path = os.path.join(workspace_root, c)
if os.path.isfile(path):
return path
cwd = os.getcwd()
for _ in range(5):
parent = os.path.dirname(cwd)
if parent == cwd:
break
cwd = parent
for c in candidates:
path = os.path.join(cwd, c)
if os.path.isfile(path):
return path
return None
def load_registry(path):
with open(path, "r", encoding="utf-8") as f:
return json.load(f)
def build_registry_index(registry_data):
    """Build a device_id → action_value_mappings index"""
index = {}
for res in registry_data.get("resources", []):
rid = res.get("id", "")
avm = res.get("class", {}).get("action_value_mappings", {})
if rid and avm:
index[rid] = avm
return index
def flatten_goal_schema(action_data):
    """Extract the goal-level schema from an action_value_mappings entry"""
schema = action_data.get("schema", {})
goal_schema = schema.get("properties", {}).get("goal", {})
return goal_schema if goal_schema else schema
def build_param_template(goal_schema):
    """Generate a param template from the goal schema, with type annotations"""
properties = goal_schema.get("properties", {})
required = set(goal_schema.get("required", []))
template = {}
for field_name, field_def in properties.items():
if field_name == "unilabos_device_id":
continue
ftype = field_def.get("type", "any")
default = field_def.get("default")
if default is not None:
template[field_name] = default
elif ftype == "string":
template[field_name] = f"$TODO ({ftype}, {'required' if field_name in required else 'optional'})"
elif ftype == "number" or ftype == "integer":
template[field_name] = 0
elif ftype == "boolean":
template[field_name] = False
elif ftype == "array":
template[field_name] = []
elif ftype == "object":
template[field_name] = {}
else:
template[field_name] = f"$TODO ({ftype})"
return template
def fetch_workflow_detail(base_url, auth_token, workflow_uuid):
    """Call the workflow detail API"""
url = f"{base_url}/api/v1/lab/workflow/template/detail/{workflow_uuid}"
req = Request(url, method="GET")
req.add_header("Authorization", f"Lab {auth_token}")
try:
with urlopen(req, timeout=30) as resp:
return json.loads(resp.read().decode("utf-8"))
except HTTPError as e:
body = e.read().decode("utf-8", errors="replace")
        print(f"API error {e.code}: {body}")
return None
except URLError as e:
        print(f"Network error: {e.reason}")
return None
def extract_nodes_from_response(response):
    """
    Extract the list of action nodes from a workflow detail response.
    Adapts to several possible response formats.
    Returns: [(node_uuid, resource_template_name, node_template_name, existing_param), ...]
    """
data = response.get("data", response)
search_keys = ["nodes", "workflow_nodes", "node_list", "steps"]
nodes_raw = None
for key in search_keys:
if key in data and isinstance(data[key], list):
nodes_raw = data[key]
break
if nodes_raw is None:
if isinstance(data, list):
nodes_raw = data
else:
for v in data.values():
if isinstance(v, list) and len(v) > 0 and isinstance(v[0], dict):
nodes_raw = v
break
if not nodes_raw:
        print("Warning: could not extract a node list from the response")
        print("Top-level response keys:", list(data.keys()) if isinstance(data, dict) else type(data).__name__)
return []
result = []
for node in nodes_raw:
if not isinstance(node, dict):
continue
node_uuid = (
node.get("uuid")
or node.get("node_uuid")
or node.get("id")
or ""
)
resource_name = (
node.get("resource_template_name")
or node.get("device_id")
or node.get("resource_name")
or node.get("device_name")
or ""
)
template_name = (
node.get("node_template_name")
or node.get("action_name")
or node.get("template_name")
or node.get("action")
or node.get("name")
or ""
)
existing_param = node.get("param", {}) or {}
if node_uuid:
result.append((node_uuid, resource_name, template_name, existing_param))
return result
def generate_template(nodes, registry_index, rounds):
    """Generate the notebook submission template"""
node_params = []
schema_info = {}
datas_template = []
for node_uuid, resource_name, template_name, existing_param in nodes:
param_template = {}
matched = False
if resource_name and template_name and resource_name in registry_index:
avm = registry_index[resource_name]
if template_name in avm:
goal_schema = flatten_goal_schema(avm[template_name])
param_template = build_param_template(goal_schema)
goal_default = avm[template_name].get("goal_default", {})
if goal_default:
for k, v in goal_default.items():
if k in param_template and v is not None:
param_template[k] = v
matched = True
schema_info[node_uuid] = {
"device_id": resource_name,
"action_name": template_name,
"action_type": avm[template_name].get("type", ""),
"schema_properties": list(goal_schema.get("properties", {}).keys()),
"required": goal_schema.get("required", []),
}
if not matched and existing_param:
param_template = existing_param
        if not matched and not existing_param:
            schema_info[node_uuid] = {
                "device_id": resource_name,
                "action_name": template_name,
                "warning": "no matching action schema found in the local registry",
            }
datas_template.append({
"node_uuid": node_uuid,
"param": param_template,
"sample_params": [
{
"container_uuid": "$TODO_CONTAINER_UUID",
"sample_value": {
"liquid_names": "$TODO_LIQUID_NAME",
"volumes": 0,
},
}
],
})
    for i in range(rounds):
        node_params.append({
            # sample_uuids must be a UUID array per the API, so wrap the placeholder in a list
            "sample_uuids": [f"$TODO_SAMPLE_UUID_ROUND_{i + 1}"],
            "datas": copy.deepcopy(datas_template),
        })
return {
"lab_uuid": "$TODO_LAB_UUID",
"workflow_uuid": "$TODO_WORKFLOW_UUID",
"name": "$TODO_EXPERIMENT_NAME",
"node_params": node_params,
        "_schema_info (reference only, delete before submitting)": schema_info,
}
def parse_args(argv):
    """Minimal argument parsing"""
opts = {
"auth": None,
"base": None,
"workflow_uuid": None,
"registry": None,
"rounds": 1,
"output": "notebook_template.json",
"dump_response": False,
}
i = 0
while i < len(argv):
arg = argv[i]
if arg == "--auth" and i + 1 < len(argv):
opts["auth"] = argv[i + 1]
i += 2
elif arg == "--base" and i + 1 < len(argv):
opts["base"] = argv[i + 1].rstrip("/")
i += 2
elif arg == "--workflow-uuid" and i + 1 < len(argv):
opts["workflow_uuid"] = argv[i + 1]
i += 2
elif arg == "--registry" and i + 1 < len(argv):
opts["registry"] = argv[i + 1]
i += 2
elif arg == "--rounds" and i + 1 < len(argv):
opts["rounds"] = int(argv[i + 1])
i += 2
elif arg == "--output" and i + 1 < len(argv):
opts["output"] = argv[i + 1]
i += 2
elif arg == "--dump-response":
opts["dump_response"] = True
i += 1
else:
            print(f"Unknown argument: {arg}")
i += 1
return opts
def main():
opts = parse_args(sys.argv[1:])
if not opts["auth"] or not opts["base"] or not opts["workflow_uuid"]:
        print("Usage:")
        print("  python gen_notebook_params.py --auth <token> --base <url> --workflow-uuid <uuid> [options]")
        print()
        print("Required arguments:")
        print("  --auth <token>          Lab token (base64(ak:sk))")
        print("  --base <url>            API base URL")
        print("  --workflow-uuid <uuid>  target workflow UUID")
        print()
        print("Optional arguments:")
        print("  --registry <path>       registry file path (auto-searched by default)")
        print("  --rounds <n>            number of experiment rounds (default 1)")
        print("  --output <path>         output file path (default notebook_template.json)")
        print("  --dump-response         print the raw API response")
sys.exit(1)
    # 1. Locate and load the local registry
registry_path = find_registry(opts["registry"])
registry_index = {}
if registry_path:
mtime = os.path.getmtime(registry_path)
gen_time = datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M:%S")
        print(f"Registry: {registry_path} (generated: {gen_time})")
registry_data = load_registry(registry_path)
registry_index = build_registry_index(registry_data)
        print(f"Indexed action schemas for {len(registry_index)} devices")
else:
        print("Warning: local registry not found; skipping param template generation")
        print("         Each node's param fields must be filled in manually at submission time")
    # 2. Fetch the workflow detail
    print(f"\nFetching workflow detail: {opts['workflow_uuid']}")
response = fetch_workflow_detail(opts["base"], opts["auth"], opts["workflow_uuid"])
if not response:
        print("Error: failed to fetch the workflow detail")
sys.exit(1)
if opts["dump_response"]:
        print("\n=== Raw API response ===")
        print(json.dumps(response, indent=2, ensure_ascii=False)[:5000])
        print("=== End of response (truncated to 5000 chars) ===\n")
    # 3. Extract the nodes
nodes = extract_nodes_from_response(response)
if not nodes:
        print("Error: no action nodes could be extracted from the workflow")
        print("Use --dump-response to inspect the raw response structure")
sys.exit(1)
    print(f"\nFound {len(nodes)} action nodes:")
    print(f"  {'Node UUID':<40} {'Device ID':<30} {'Action':<25} {'Schema'}")
    print("  " + "-" * 110)
    for node_uuid, resource_name, template_name, _ in nodes:
        matched = "✓" if (resource_name in registry_index and
                          template_name in registry_index.get(resource_name, {})) else "✗"
        print(f"  {node_uuid:<40} {resource_name:<30} {template_name:<25} {matched}")
    # 4. Generate the template
template = generate_template(nodes, registry_index, opts["rounds"])
template["workflow_uuid"] = opts["workflow_uuid"]
output_path = opts["output"]
with open(output_path, "w", encoding="utf-8") as f:
json.dump(template, f, indent=2, ensure_ascii=False)
    print(f"\nTemplate written to: {output_path}")
    print(f"  rounds: {opts['rounds']}")
    print(f"  nodes per round: {len(nodes)}")
    print()
    print("Next steps:")
    print("  1. Open the template file and replace the $TODO placeholders with real values")
    print("  2. Delete the _schema_info field (reference only)")
    print("  3. Submit via POST /api/v1/lab/notebook")
if __name__ == "__main__":
main()


@@ -1,328 +0,0 @@
---
name: create-device-skill
description: Create a skill for any Uni-Lab device by extracting action schemas from the device registry. Use when the user wants to create a new device skill, add device API documentation, or set up action schemas for a device.
---
# Device Skill Creation Guide
This meta-skill teaches you how to build a complete API-operation skill for any Uni-Lab-OS device (modeled on the successful `unilab-device-api` case).
## Data sources
- **Device registry**: `unilabos_data/req_device_registry_upload.json`
- **Structure**: `{ "resources": [{ "id": "<device_id>", "class": { "module": "<python_module:ClassName>", "action_value_mappings": { ... } } }] }`
- **When generated**: automatically, after `unilab` starts and the registry upload completes
- **module field**: formatted `unilabos.devices.xxx.yyy:ClassName`; convertible to the source path `unilabos/devices/xxx/yyy.py` — read the source to understand parameter meanings and device behavior
### Step 0 — 收集必备信息(缺一不可,否则询问后终止)
开始前**必须**确认以下 4 项信息全部就绪。如果用户未提供任何一项,**立即询问并终止当前流程**,等用户补齐后再继续。
向用户提问:「请提供你的 unilab 启动参数,我需要以下信息:」
#### 必备项 ①ak / sk认证凭据
来源:启动命令的 `--ak` `--sk` 参数,或 config.py 中的 `ak = "..."` `sk = "..."`
获取后立即生成 AUTH token
```bash
python ./scripts/gen_auth.py <ak> <sk>
# 或从 config.py 提取
python ./scripts/gen_auth.py --config <config.py>
```
Auth algorithm: `base64(ak:sk)` → `Authorization: Lab <token>`
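The encoding above can be reproduced in a few lines of Python (a sketch; `gen_auth_header` is our name for it, the repository ships `gen_auth.py` for the same purpose):

```python
import base64

def gen_auth_header(ak: str, sk: str) -> str:
    # base64(ak:sk) -> "Authorization: Lab <token>"
    token = base64.b64encode(f"{ak}:{sk}".encode("utf-8")).decode("utf-8")
    return f"Authorization: Lab {token}"

print(gen_auth_header("myak", "mysk"))  # Authorization: Lab bXlhazpteXNr
```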
#### Required item ②: --addr (target environment)
Determines which server API requests are sent to. Taken from the `--addr` startup argument:
| `--addr` value | BASE URL |
|-------------|----------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| (not passed, default) | `https://uni-lab.bohrium.com` |
| any other custom URL | used as-is |
#### Required item ③: req_device_registry_upload.json (device registry)
This data file is generated automatically when `unilab` starts; you need to locate it.
**Infer working_dir** (the directory that contains `unilabos_data`):
| Condition | working_dir value |
|------|------------------|
| `--working_dir` was passed | `<working_dir>/unilabos_data/` (use the subdirectory directly if it already exists) |
| only `--config` was passed | `<directory containing the config file>/unilabos_data/` |
| neither was passed | `<current working directory>/unilabos_data/` |
**Search for the file in priority order**:
```
<inferred working_dir>/unilabos_data/req_device_registry_upload.json
<inferred working_dir>/req_device_registry_upload.json
<workspace root>/unilabos_data/req_device_registry_upload.json
```
You can also Glob for it directly: `**/req_device_registry_upload.json`
Once found, you **must check the file's modification time** and tell the user: "Found registry file `<path>`, generated at `<time>`. Please confirm it is from the most recent startup." If it is more than one day old, ask the user whether `unilab` needs to be restarted.
**If the file does not exist** → tell the user to run the `unilab` startup command first and wait for the log line `注册表响应数据已保存` (registry response data saved) before rerunning this flow. **Stop.**
#### Required item ④: target device
The user must specify which device the skill is for. This can be a device name (e.g. "PRCXI liquid handler") or a device_id (e.g. `liquid_handler.prcxi`).
If the user is unsure, run the extraction script to list all devices to choose from:
```bash
python ./scripts/extract_device_actions.py --registry <找到的文件路径>
```
#### Complete example
The user provides:
```
--ak a1fd9d4e-xxxx-xxxx-xxxx-d9a69c09f0fd
--sk 136ff5c6-xxxx-xxxx-xxxx-a03e301f827b
--addr test
--port 8003
--disable_browser
```
Extract from it:
- ✅ ak/sk → run `gen_auth.py` to get `AUTH="Authorization: Lab YTFmZDlk..."`
- ✅ addr=test → `BASE=https://uni-lab.test.bohrium.com`
- ✅ search for `unilabos_data/req_device_registry_upload.json` → found; timestamp confirmed
- ✅ user names the target device → e.g. `liquid_handler.prcxi`
**Only when all four items are ready do you proceed to Step 1.**
### Step 1 — List available devices
Run the extraction script to list every device with its action count and Python source path, and let the user choose:
```bash
# auto-search (looks in unilabos_data/ and the current directory by default)
python ./scripts/extract_device_actions.py
# specify the registry file path
python ./scripts/extract_device_actions.py --registry <path/to/req_device_registry_upload.json>
```
The script output includes each device's **Python source path** (converted from `class.module`); use it later to read the source and understand parameter semantics.
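The module-to-path conversion mentioned above is mechanical; a minimal sketch (hypothetical helper name):

```python
def module_to_source_path(module: str) -> tuple[str, str]:
    # "unilabos.devices.xxx.yyy:ClassName" -> ("unilabos/devices/xxx/yyy.py", "ClassName")
    mod, _, cls = module.partition(":")
    return mod.replace(".", "/") + ".py", cls

print(module_to_source_path("unilabos.devices.xxx.yyy:ClassName"))
```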
### Step 2 — Extract action schemas
After the user picks a device, run the extraction script:
```bash
python ./scripts/extract_device_actions.py [--registry <path>] <device_id> ./skills/<skill-name>/actions/
```
The script prints the device's Python source path and class name so you can read the source for parameter semantics.
One JSON file is generated per action, containing:
- `type` — used as the `action_type` in API calls
- `schema` — the full JSON Schema (parameter definitions under `properties.goal.properties`)
- `goal` — goal field mapping (with `$placeholder` placeholders)
- `goal_default` — default values
### Step 3 — Write action-index.md
Write one entry per action using this template:
```markdown
### `<action_name>`
<one-sentence purpose description>
- **Schema**: [`actions/<filename>.json`](actions/<filename>.json)
- **Core parameters**: `param1`, `param2` (from schema.required)
- **Optional parameters**: `param3`, `param4`
- **Placeholder fields**: `field` (needs material information; values start with `$`)
```
Description rules:
- Read the parameter list from `schema.properties` (the schema has already been promoted to the goal content)
- Use `schema.required` to separate core vs. optional parameters
- Group by function (pipetting, tips, peripherals, etc.)
- Annotate the type of each field in `placeholder_keys`:
  - `unilabos_resources` → **ResourceSlot**; fill in `{id, name, uuid}` (id is path-formatted; take a material node from the resource tree)
  - `unilabos_devices` → **DeviceSlot**; fill in a path string such as `"/host_node"` (filter the resource tree for type=device)
  - `unilabos_nodes` → **NodeSlot**; fill in a path string such as `"/PRCXI/PRCXI_Deck"` (any node in the resource tree)
  - `unilabos_class` → **ClassSlot**; fill in a class name string such as `"container"` (looked up in the registry)
- Array-typed fields → `[{id, name, uuid}, ...]`
- Special case: `res_id` of `create_resource` (ResourceSlot) may be a path that does not exist yet
### Step 4 — Write SKILL.md
Reuse the API template from `unilab-device-api` (10 endpoints) directly, changing:
- the device name
- the action count
- the directory listing
- `device_name` in session state
- **AUTH header** — use the `Authorization: Lab <token>` generated by `gen_auth.py` in Step 0 (do not hard-code an `Api`-type key)
- **Python source path** — note the device's source file at the top of SKILL.md for parameter reference
- **Slot field table** — list which fields of which actions on this device need a Slot (material/device/node/class name)
API template structure:
```markdown
## Device info
- device_id, Python source path, device class name
## Prerequisites (all mandatory)
- ak/sk → AUTH, --addr → BASE URL
## Session State
- lab_uuid (auto-matched via API #1; do not ask the user), device_name
## API Endpoints (10 total)
# Notes:
# - #1 lists labs + auto-matches lab_uuid: iterate labs where is_admin,
#   call /lab/info/{uuid} and compare access_key == ak
# - #2 creates a workflow via POST /lab/workflow
# - #10 fetches the resource tree; the path includes lab_uuid: /lab/material/download/{lab_uuid}
## Placeholder Slot rules
- unilabos_resources → ResourceSlot → {"id":"/path/name","name":"name","uuid":"xxx"}
- unilabos_devices → DeviceSlot → "/parent/device" path string
- unilabos_nodes → NodeSlot → "/parent/node" path string
- unilabos_class → ClassSlot → "class_name" string
- special case: create_resource's res_id may be a nonexistent path
- list all Slot fields on this device, with type and meaning
## Progressive loading strategy
## Full workflow checklist
```
### Step 5 — Verify
Check file completeness:
- [ ] `SKILL.md` contains all 10 API endpoints
- [ ] `SKILL.md` contains the Placeholder Slot rules (ResourceSlot / DeviceSlot / NodeSlot / ClassSlot + the create_resource special case) and this device's Slot field table
- [ ] `action-index.md` lists every action with a description
- [ ] every action has a corresponding JSON file in `actions/`
- [ ] each JSON file contains the `type`, `schema` (already promoted to goal content), `goal`, `goal_default`, and `placeholder_keys` fields
- [ ] descriptions are good enough for an agent to decide which action to use
## Action JSON File Structure
```json
{
  "type": "LiquidHandlerTransfer",     // → the API's action_type
  "goal": {                            // goal field mapping
    "sources": "sources",
    "targets": "targets",
    "tip_racks": "tip_racks",
    "asp_vols": "asp_vols"
  },
  "schema": {                          // ← this IS the goal schema (already promoted)
    "type": "object",
    "properties": {                    // parameter definitions (the fields of goal in the request)
      "sources": { "type": "array", "items": { "type": "object" } },
      "targets": { "type": "array", "items": { "type": "object" } },
      "asp_vols": { "type": "array", "items": { "type": "number" } }
    },
    "required": [...],
    "_unilabos_placeholder_info": {    // ← Slot type markers
      "sources": "unilabos_resources",
      "targets": "unilabos_resources",
      "tip_racks": "unilabos_resources"
    }
  },
  "goal_default": { ... },             // default values
  "placeholder_keys": {                // ← all Slot fields, collected
    "sources": "unilabos_resources",   // ResourceSlot
    "targets": "unilabos_resources",
    "tip_racks": "unilabos_resources",
    "target_device_id": "unilabos_devices"  // DeviceSlot
  }
}
```
> **Note**: the script has already promoted `schema` from the original `schema.properties.goal` to the top level, so it contains the parameter definitions directly.
> The fields in `schema.properties` are exactly the fields of `param.goal` in the API request.
## Placeholder Slot Type System
`placeholder_keys` / `_unilabos_placeholder_info` take 4 possible values, each with its own fill-in format:
| placeholder value | Slot type | Fill-in format | Selection scope |
|---------------|-----------|---------|---------|
| `unilabos_resources` | ResourceSlot | `{"id": "/path/name", "name": "name", "uuid": "xxx"}` | **material** nodes only (no devices) |
| `unilabos_devices` | DeviceSlot | `"/parent/device_name"` | **device** nodes only (type=device), path string |
| `unilabos_nodes` | NodeSlot | `"/parent/node_name"` | **devices + materials**, i.e. all nodes, path string |
| `unilabos_class` | ClassSlot | `"class_name"` | resource class names uploaded to the registry |
### ResourceSlot (`unilabos_resources`)
The most common type. Pick a **material** node (well plate, tip box, reagent trough, etc.) from the resource tree:
```json
{"id": "/workstation/container1", "name": "container1", "uuid": "ff149a9a-2cb8-419d-8db5-d3ba056fb3c2"}
```
- Single (schema type=object): `{"id": "/path/name", "name": "name", "uuid": "xxx"}`
- Array (schema type=array): `[{"id": "/path/a", "name": "a", "uuid": "xxx"}, ...]`
- `id` itself is a path computed from the parent chain
- Pick the right material for the action semantics (e.g. `sources` = liquid source, `targets` = destination)
> **Special case**: for the `res_id` field of `create_resource`, the target material may **not exist yet**; in that case write the desired path directly (e.g. `"/workstation/container1"`), no uuid needed.
### DeviceSlot (`unilabos_devices`)
Fill in a **device path string**. Filter the resource tree for type=device nodes and compute the path from the parent chain:
```
"/host_node"
"/bioyond_cell/reaction_station"
```
- Path string only; no `{id, uuid}` object needed
- Pick the right device for the action semantics (e.g. `target_device_id` = target device)
### NodeSlot (`unilabos_nodes`)
Scope = devices + materials, i.e. **any node** in the resource tree; fill in a **path string**:
```
"/PRCXI/PRCXI_Deck"
```
- Use case: when a parameter may point at either a material or a device (e.g. `from_vessel`/`to_vessel` of `PumpTransferProtocol`, or `parent` of `create_resource`)
### ClassSlot (`unilabos_class`)
Fill in a **resource class name** that has been uploaded to the registry. Look it up in the local `req_resource_registry_upload.json`:
```
"container"
```
### Fetching the resource tree via API #10
```bash
curl -s -X GET "$BASE/api/v1/lab/material/download/$lab_uuid" -H "$AUTH"
```
Note that `lab_uuid` goes in the path (not a query parameter). The resource tree returns every node; each node carries `id` (path format), `name`, `uuid`, `type`, `parent`, and other fields. When filling a Slot, filter for the right kind of node based on the placeholder type.
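Selecting a node and formatting the slot value per the table above can be sketched as (the helper name is ours; the node fields follow the list above):

```python
def fill_slot(nodes: list[dict], slot_type: str, name: str):
    # nodes: entries from the resource tree, each with id/name/uuid/type
    # slot_type: one of the placeholder values from the table above
    for node in nodes:
        if node["name"] != name:
            continue
        if slot_type == "unilabos_resources" and node.get("type") != "device":
            # ResourceSlot: materials only, as an {id, name, uuid} object
            return {"id": node["id"], "name": node["name"], "uuid": node["uuid"]}
        if slot_type == "unilabos_devices" and node.get("type") == "device":
            return node["id"]  # DeviceSlot: path string only
        if slot_type == "unilabos_nodes":
            return node["id"]  # NodeSlot: any node, path string
    return None
```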
## Final Directory Structure
```
./<skill-name>/
├── SKILL.md              # API endpoints + progressive loading guide
├── action-index.md       # action index: description/purpose/core parameters
└── actions/              # full JSON Schema per action
    ├── action1.json
    ├── action2.json
    └── ...
```


@@ -1,200 +0,0 @@
#!/usr/bin/env python3
"""
从 req_device_registry_upload.json 中提取指定设备的 action schema。
用法:
# 列出所有设备及 action 数量(自动搜索注册表文件)
python extract_device_actions.py
# 指定注册表文件路径
python extract_device_actions.py --registry <path/to/req_device_registry_upload.json>
# 提取指定设备的 action 到目录
python extract_device_actions.py <device_id> <output_dir>
python extract_device_actions.py --registry <path> <device_id> <output_dir>
示例:
python extract_device_actions.py --registry unilabos_data/req_device_registry_upload.json
python extract_device_actions.py liquid_handler.prcxi .cursor/skills/unilab-device-api/actions/
"""
import json
import os
import sys
from datetime import datetime
REGISTRY_FILENAME = "req_device_registry_upload.json"
def find_registry(explicit_path=None):
    """
    Locate the req_device_registry_upload.json file.
    Search priority:
    1. path explicitly given via --registry
    2. <cwd>/unilabos_data/req_device_registry_upload.json
    3. <cwd>/req_device_registry_upload.json
    4. unilabos_data/ under <script dir>/../../.. (workspace root)
    5. walk up through parent directories (at most 5 levels)
    """
    if explicit_path:
        if os.path.isfile(explicit_path):
            return explicit_path
        if os.path.isdir(explicit_path):
            fp = os.path.join(explicit_path, REGISTRY_FILENAME)
            if os.path.isfile(fp):
                return fp
        print(f"Warning: the given path does not exist: {explicit_path}")
        return None
    candidates = [
        os.path.join("unilabos_data", REGISTRY_FILENAME),
        REGISTRY_FILENAME,
    ]
    for c in candidates:
        if os.path.isfile(c):
            return c
    script_dir = os.path.dirname(os.path.abspath(__file__))
    workspace_root = os.path.normpath(os.path.join(script_dir, "..", "..", ".."))
    for c in candidates:
        path = os.path.join(workspace_root, c)
        if os.path.isfile(path):
            return path
    cwd = os.getcwd()
    for _ in range(5):
        parent = os.path.dirname(cwd)
        if parent == cwd:
            break
        cwd = parent
        for c in candidates:
            path = os.path.join(cwd, c)
            if os.path.isfile(path):
                return path
    return None
def load_registry(path):
    with open(path, 'r', encoding='utf-8') as f:
        return json.load(f)
def list_devices(data):
    """List all devices that have action_value_mappings, along with their module paths."""
    resources = data.get('resources', [])
    devices = []
    for res in resources:
        rid = res.get('id', '')
        cls = res.get('class', {})
        avm = cls.get('action_value_mappings', {})
        module = cls.get('module', '')
        if avm:
            devices.append((rid, len(avm), module))
    return devices
def flatten_schema_to_goal(action_data):
    """Promote the nested goal schema to the top-level schema, dropping the feedback/result wrappers."""
    schema = action_data.get('schema', {})
    goal_schema = schema.get('properties', {}).get('goal', {})
    if goal_schema:
        action_data = dict(action_data)
        action_data['schema'] = goal_schema
    return action_data
def extract_actions(data, device_id, output_dir):
    """Extract the given device's action schemas into standalone JSON files."""
    resources = data.get('resources', [])
    for res in resources:
        if res.get('id') == device_id:
            cls = res.get('class', {})
            module = cls.get('module', '')
            avm = cls.get('action_value_mappings', {})
            if not avm:
                print(f"Device {device_id} has no action_value_mappings")
                return []
            if module:
                py_path = module.split(":")[0].replace(".", "/") + ".py"
                class_name = module.split(":")[-1] if ":" in module else ""
                print(f"Python source: {py_path}")
                if class_name:
                    print(f"Device class: {class_name}")
            os.makedirs(output_dir, exist_ok=True)
            written = []
            for action_name in sorted(avm.keys()):
                action_data = flatten_schema_to_goal(avm[action_name])
                filename = action_name.replace('-', '_') + '.json'
                filepath = os.path.join(output_dir, filename)
                with open(filepath, 'w', encoding='utf-8') as f:
                    json.dump(action_data, f, indent=2, ensure_ascii=False)
                written.append(filename)
                print(f"  {filepath}")
            return written
    print(f"Device {device_id} not found")
    return []
def main():
    args = sys.argv[1:]
    explicit_registry = None
    if "--registry" in args:
        idx = args.index("--registry")
        if idx + 1 < len(args):
            explicit_registry = args[idx + 1]
            args = args[:idx] + args[idx + 2:]
        else:
            print("Error: --registry requires a path")
            sys.exit(1)
    registry_path = find_registry(explicit_registry)
    if not registry_path:
        print(f"Error: cannot find {REGISTRY_FILENAME}")
        print()
        print("How to fix:")
        print("  1. Run the unilab startup command first and wait for the registry to be generated")
        print("  2. Or point at the file with --registry:")
        print(f"     python {sys.argv[0]} --registry <path/to/{REGISTRY_FILENAME}>")
        print()
        print("Paths searched:")
        for p in [
            os.path.join("unilabos_data", REGISTRY_FILENAME),
            REGISTRY_FILENAME,
            os.path.join("<workspace_root>", "unilabos_data", REGISTRY_FILENAME),
        ]:
            print(f"  - {p}")
        sys.exit(1)
    print(f"Registry: {registry_path}")
    mtime = os.path.getmtime(registry_path)
    gen_time = datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M:%S")
    size_mb = os.path.getsize(registry_path) / (1024 * 1024)
    print(f"Generated: {gen_time} (file size: {size_mb:.1f} MB)")
    data = load_registry(registry_path)
    if len(args) == 0:
        devices = list_devices(data)
        print(f"\nFound {len(devices)} devices:")
        print(f"{'Device ID':<50} {'Actions':>7} {'Python module'}")
        print("-" * 120)
        for did, count, module in sorted(devices, key=lambda x: x[0]):
            py_path = module.split(":")[0].replace(".", "/") + ".py" if module else ""
            print(f"{did:<50} {count:>7} {py_path}")
    elif len(args) == 2:
        device_id = args[0]
        output_dir = args[1]
        print(f"\nExtracting actions of {device_id} into {output_dir}/")
        written = extract_actions(data, device_id, output_dir)
        if written:
            print(f"\nWrote {len(written)} action files")
    else:
        print("Usage:")
        print("  python extract_device_actions.py [--registry <path>]                     # list devices")
        print("  python extract_device_actions.py [--registry <path>] <device_id> <dir>  # extract actions")
        sys.exit(1)
if __name__ == '__main__':
    main()


@@ -1,69 +0,0 @@
#!/usr/bin/env python3
"""
从 ak/sk 生成 UniLab API Authorization header。
算法: base64(ak:sk) → "Authorization: Lab <token>"
用法:
python gen_auth.py <ak> <sk>
python gen_auth.py --config <config.py>
示例:
python gen_auth.py myak mysk
python gen_auth.py --config experiments/config.py
"""
import base64
import re
import sys
def gen_auth(ak: str, sk: str) -> str:
    token = base64.b64encode(f"{ak}:{sk}".encode("utf-8")).decode("utf-8")
    return token
def extract_from_config(config_path: str) -> tuple:
    """Extract ak and sk from config.py."""
    with open(config_path, "r", encoding="utf-8") as f:
        content = f.read()
    ak_match = re.search(r'''ak\s*=\s*["']([^"']+)["']''', content)
    sk_match = re.search(r'''sk\s*=\s*["']([^"']+)["']''', content)
    if not ak_match or not sk_match:
        return None, None
    return ak_match.group(1), sk_match.group(1)
def main():
    args = sys.argv[1:]
    if len(args) == 2 and args[0] == "--config":
        ak, sk = extract_from_config(args[1])
        if not ak or not sk:
            print(f"Error: no ak/sk found in {args[1]}")
            print("Expected format: ak = \"xxx\"  sk = \"xxx\"")
            sys.exit(1)
        print(f"Config file: {args[1]}")
    elif len(args) == 2:
        ak, sk = args
    else:
        print("Usage:")
        print("  python gen_auth.py <ak> <sk>")
        print("  python gen_auth.py --config <config.py>")
        sys.exit(1)
    token = gen_auth(ak, sk)
    print(f"ak: {ak}")
    print(f"sk: {sk}")
    print()
    print("Authorization header:")
    print(f"  Authorization: Lab {token}")
    print()
    print("curl usage:")
    print(f'  curl -H "Authorization: Lab {token}" ...')
    print()
    print("Shell variable:")
    print(f'  AUTH="Authorization: Lab {token}"')
if __name__ == "__main__":
    main()


@@ -1,19 +0,0 @@
version: 2
updates:
  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    target-branch: "dev"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
    open-pull-requests-limit: 5
    reviewers:
      - "msgcenterpy-team"
    labels:
      - "dependencies"
      - "github-actions"
    commit-message:
      prefix: "ci"
      include: "scope"


@@ -1,67 +0,0 @@
name: CI Check
on:
  push:
    branches: [main, dev]
  pull_request:
    branches: [main, dev]
jobs:
  registry-check:
    runs-on: windows-latest
    env:
      # Fix Unicode encoding issue on Windows runner (cp1252 -> utf-8)
      PYTHONIOENCODING: utf-8
      PYTHONUTF8: 1
    defaults:
      run:
        shell: cmd
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Setup Miniforge
        uses: conda-incubator/setup-miniconda@v3
        with:
          miniforge-version: latest
          use-mamba: true
          channels: robostack-staging,conda-forge,uni-lab
          channel-priority: flexible
          activate-environment: check-env
          auto-update-conda: false
          show-channel-urls: true
      - name: Install ROS dependencies, uv and unilabos-msgs
        run: |
          echo Installing ROS dependencies...
          mamba install -n check-env conda-forge::uv conda-forge::opencv robostack-staging::ros-humble-ros-core robostack-staging::ros-humble-action-msgs robostack-staging::ros-humble-std-msgs robostack-staging::ros-humble-geometry-msgs robostack-staging::ros-humble-control-msgs robostack-staging::ros-humble-nav2-msgs uni-lab::ros-humble-unilabos-msgs robostack-staging::ros-humble-cv-bridge robostack-staging::ros-humble-vision-opencv robostack-staging::ros-humble-tf-transformations robostack-staging::ros-humble-moveit-msgs robostack-staging::ros-humble-tf2-ros robostack-staging::ros-humble-tf2-ros-py conda-forge::transforms3d -c robostack-staging -c conda-forge -c uni-lab -y
      - name: Install pip dependencies and unilabos
        run: |
          call conda activate check-env
          echo Installing pip dependencies...
          uv pip install -r unilabos/utils/requirements.txt
          uv pip install pywinauto git+https://github.com/Xuwznln/pylabrobot.git
          uv pip uninstall enum34 || echo enum34 not installed, skipping
          uv pip install .
      - name: Run check mode (AST registry validation)
        run: |
          call conda activate check-env
          echo Running check mode...
          python -m unilabos --check_mode --skip_env_check
      - name: Check for uncommitted changes
        shell: bash
        run: |
          if ! git diff --exit-code; then
            echo "::error::File changes detected! Run 'python -m unilabos --complete_registry' locally first and commit the changes"
            echo "Changed files:"
            git diff --name-only
            exit 1
          fi
          echo "Check passed: no file changes"


@@ -13,11 +13,6 @@ on:
required: false
default: 'win-64'
type: string
build_full:
description: 'Build the full unilabos-full package (the lightweight unilabos is built by default)'
required: false
default: false
type: boolean
jobs:
build-conda-pack:
@@ -29,7 +24,7 @@ jobs:
platform: linux-64
env_file: unilabos-linux-64.yaml
script_ext: sh
- os: macos-15 # Intel (via Rosetta)
- os: macos-13 # Intel
platform: osx-64
env_file: unilabos-osx-64.yaml
script_ext: sh
@@ -62,7 +57,7 @@ jobs:
echo "should_build=false" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v6
- uses: actions/checkout@v4
if: steps.should_build.outputs.should_build == 'true'
with:
ref: ${{ github.event.inputs.branch }}
@@ -74,7 +69,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.14'
python-version: '3.11.11'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -86,14 +81,7 @@ jobs:
run: |
echo Installing unilabos and dependencies to unilab environment...
echo Using mamba for faster and more reliable dependency resolution...
echo Build full: ${{ github.event.inputs.build_full }}
if "${{ github.event.inputs.build_full }}"=="true" (
echo Installing unilabos-full ^(complete package^)...
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
) else (
echo Installing unilabos ^(minimal package^)...
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
)
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
- name: Install conda-pack, unilabos and dependencies (Unix)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform != 'win-64'
@@ -101,14 +89,7 @@ jobs:
run: |
echo "Installing unilabos and dependencies to unilab environment..."
echo "Using mamba for faster and more reliable dependency resolution..."
echo "Build full: ${{ github.event.inputs.build_full }}"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Installing unilabos-full (complete package)..."
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
else
echo "Installing unilabos (minimal package)..."
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
fi
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
- name: Get latest ros-humble-unilabos-msgs version (Windows)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform == 'win-64'
@@ -312,7 +293,7 @@ jobs:
- name: Upload distribution package
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: unilab-pack-${{ matrix.platform }}-${{ github.event.inputs.branch }}
path: dist-package/
@@ -327,12 +308,7 @@ jobs:
echo ==========================================
echo Platform: ${{ matrix.platform }}
echo Branch: ${{ github.event.inputs.branch }}
echo Python version: 3.11.14
if "${{ github.event.inputs.build_full }}"=="true" (
echo Package: unilabos-full ^(complete^)
) else (
echo Package: unilabos ^(minimal^)
)
echo Python version: 3.11.11
echo.
echo Distribution package contents:
dir dist-package
@@ -352,12 +328,7 @@ jobs:
echo "=========================================="
echo "Platform: ${{ matrix.platform }}"
echo "Branch: ${{ github.event.inputs.branch }}"
echo "Python version: 3.11.14"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Package: unilabos-full (complete)"
else
echo "Package: unilabos (minimal)"
fi
echo "Python version: 3.11.11"
echo ""
echo "Distribution package contents:"
ls -lh dist-package/


@@ -1,12 +1,10 @@
name: Deploy Docs
on:
# Triggered automatically after CI Check succeeds (main branch only)
workflow_run:
workflows: ["CI Check"]
types: [completed]
push:
branches: [main]
pull_request:
branches: [main]
# Manual trigger
workflow_dispatch:
inputs:
branch:
@@ -35,19 +33,12 @@ concurrency:
jobs:
# Build documentation
build:
# Run only when:
# 1. triggered by workflow_run and CI Check succeeded
# 2. triggered manually
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
# On workflow_run use the triggering workflow's branch; on manual dispatch use the input branch
ref: ${{ github.event.workflow_run.head_branch || github.event.inputs.branch || github.ref }}
ref: ${{ github.event.inputs.branch || github.ref }}
fetch-depth: 0
- name: Setup Miniforge (with mamba)
@@ -55,7 +46,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.14'
python-version: '3.11.11'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -84,10 +75,8 @@ jobs:
- name: Setup Pages
id: pages
uses: actions/configure-pages@v5
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/configure-pages@v4
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
- name: Build Sphinx documentation
run: |
@@ -105,18 +94,14 @@ jobs:
test -f docs/_build/html/index.html && echo "✓ index.html exists" || echo "✗ index.html missing"
- name: Upload build artifacts
uses: actions/upload-pages-artifact@v4
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/upload-pages-artifact@v3
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
with:
path: docs/_build/html
# Deploy to GitHub Pages
deploy:
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}


@@ -1,16 +1,11 @@
name: Multi-Platform Conda Build
on:
# Triggered after the CI Check workflow completes (main/dev branches only)
workflow_run:
workflows: ["CI Check"]
types:
- completed
branches: [main, dev]
# Also triggered on tag pushes (does not depend on CI Check)
push:
branches: [main, dev]
tags: ['v*']
# Manual trigger
pull_request:
branches: [main, dev]
workflow_dispatch:
inputs:
platforms:
@@ -22,37 +17,9 @@ on:
required: false
default: false
type: boolean
skip_ci_check:
description: 'Skip waiting for CI Check (optional for manual dispatch)'
required: false
default: false
type: boolean
jobs:
# Job that waits for CI Check to finish (workflow_run trigger only)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# Run condition: triggered by workflow_run with CI success, or by any other trigger
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
@@ -60,7 +27,7 @@ jobs:
- os: ubuntu-latest
platform: linux-64
env_file: unilabos-linux-64.yaml
- os: macos-15 # Intel (via Rosetta)
- os: macos-13 # Intel
platform: osx-64
env_file: unilabos-osx-64.yaml
- os: macos-latest # ARM64
@@ -77,10 +44,8 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
# If triggered by workflow_run, use the commit that triggered CI Check
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -104,6 +69,7 @@ jobs:
channels: conda-forge,robostack-staging,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -149,7 +115,7 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: conda-package-${{ matrix.platform }}
path: conda-packages-temp


@@ -1,69 +1,32 @@
name: UniLabOS Conda Build
on:
# Triggered automatically after CI Check succeeds
workflow_run:
workflows: ["CI Check"]
types: [completed]
branches: [main, dev]
# Triggered directly on tag pushes (release versions)
push:
branches: [main, dev]
tags: ['v*']
# Manual trigger
pull_request:
branches: [main, dev]
workflow_dispatch:
inputs:
platforms:
description: 'Platforms to build (comma-separated): linux-64, osx-64, osx-arm64, win-64'
required: false
default: 'linux-64'
build_full:
description: 'Build the full unilabos-full package (only the unilabos base package is built by default)'
required: false
default: false
type: boolean
upload_to_anaconda:
description: 'Upload to Anaconda.org'
required: false
default: false
type: boolean
skip_ci_check:
description: 'Skip waiting for CI Check (optional for manual dispatch)'
required: false
default: false
type: boolean
jobs:
# Job that waits for CI Check to finish (workflow_run trigger only)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# Run condition: triggered by workflow_run with CI success, or by any other trigger
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
platform: linux-64
- os: macos-15 # Intel (via Rosetta)
- os: macos-13 # Intel
platform: osx-64
- os: macos-latest # ARM64
platform: osx-arm64
@@ -77,10 +40,8 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
# If triggered by workflow_run, use the commit that triggered CI Check
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -104,6 +65,7 @@ jobs:
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -119,61 +81,12 @@ jobs:
conda list | grep -E "(rattler-build|anaconda-client)"
echo "Platform: ${{ matrix.platform }}"
echo "OS: ${{ matrix.os }}"
echo "Build full package: ${{ github.event.inputs.build_full || 'false' }}"
echo "Building packages:"
echo " - unilabos-env (environment dependencies)"
echo " - unilabos (with pip package)"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo " - unilabos-full (complete package)"
fi
echo "Building UniLabOS package"
- name: Build unilabos-env (conda environment only, noarch)
- name: Build conda package
if: steps.should_build.outputs.should_build == 'true'
run: |
echo "Building unilabos-env (conda environment dependencies)..."
rattler-build build -r .conda/environment/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
- name: Upload unilabos-env to Anaconda.org (if enabled)
if: steps.should_build.outputs.should_build == 'true' && github.event.inputs.upload_to_anaconda == 'true'
run: |
echo "Uploading unilabos-env to uni-lab organization..."
for package in $(find ./output -name "unilabos-env*.conda"); do
anaconda -t ${{ secrets.ANACONDA_API_TOKEN }} upload --user uni-lab --force "$package"
done
- name: Build unilabos (with pip package)
if: steps.should_build.outputs.should_build == 'true'
run: |
echo "Building unilabos package..."
# If already uploaded to Anaconda, pull unilabos-env from the uni-lab channel; otherwise use the local output
rattler-build build -r .conda/base/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
- name: Upload unilabos to Anaconda.org (if enabled)
if: steps.should_build.outputs.should_build == 'true' && github.event.inputs.upload_to_anaconda == 'true'
run: |
echo "Uploading unilabos to uni-lab organization..."
for package in $(find ./output -name "unilabos-0*.conda" -o -name "unilabos-[0-9]*.conda"); do
anaconda -t ${{ secrets.ANACONDA_API_TOKEN }} upload --user uni-lab --force "$package"
done
- name: Build unilabos-full - Only when explicitly requested
if: |
steps.should_build.outputs.should_build == 'true' &&
github.event.inputs.build_full == 'true'
run: |
echo "Building unilabos-full package on ${{ matrix.platform }}..."
rattler-build build -r .conda/full/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
- name: Upload unilabos-full to Anaconda.org (if enabled)
if: |
steps.should_build.outputs.should_build == 'true' &&
github.event.inputs.build_full == 'true' &&
github.event.inputs.upload_to_anaconda == 'true'
run: |
echo "Uploading unilabos-full to uni-lab organization..."
for package in $(find ./output -name "unilabos-full*.conda"); do
anaconda -t ${{ secrets.ANACONDA_API_TOKEN }} upload --user uni-lab --force "$package"
done
rattler-build build -r .conda/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
- name: List built packages
if: steps.should_build.outputs.should_build == 'true'
@@ -195,9 +108,17 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: conda-package-unilabos-${{ matrix.platform }}
path: conda-packages-temp
if-no-files-found: warn
retention-days: 30
- name: Upload to Anaconda.org (uni-lab organization)
if: github.event.inputs.upload_to_anaconda == 'true'
run: |
for package in $(find ./output -name "*.conda"); do
echo "Uploading $package to uni-lab organization..."
anaconda -t ${{ secrets.ANACONDA_API_TOKEN }} upload --user uni-lab --force "$package"
done

.gitignore vendored

@@ -1,11 +1,8 @@
cursor_docs/
configs/
temp/
output/
unilabos_data/
pyrightconfig.json
.cursorignore
device_package*/
## Python
# Byte-compiled / optimized / DLL files


@@ -1,87 +0,0 @@
# AGENTS.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Also follow the monorepo-level rules in `../AGENTS.md`.
## Build & Development
```bash
# Install in editable mode (requires mamba env with python 3.11)
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# Run with a device graph
unilab --graph <graph.json> --config <config.py> --backend ros
unilab --graph <graph.json> --config <config.py> --backend simple # no ROS2 needed
# Common CLI flags
unilab --app_bridges websocket fastapi # communication bridges
unilab --test_mode # simulate hardware, no real execution
unilab --check_mode # CI validation of registry imports
unilab --skip_env_check # skip auto-install of dependencies
unilab --visual rviz|web|disable # visualization mode
unilab --is_slave # run as slave node
# Workflow upload subcommand
unilab workflow_upload -f <workflow.json> -n <name> --tags tag1 tag2
# Tests
pytest tests/ # all tests
pytest tests/resources/test_resourcetreeset.py # single test file
pytest tests/resources/test_resourcetreeset.py::TestClassName::test_method # single test
```
## Architecture
### Startup Flow
`unilab` CLI → `unilabos/app/main.py:main()` → loads config → builds registry → reads device graph (JSON/GraphML) → starts backend thread (ROS2/simple) → starts FastAPI web server + WebSocket client.
### Core Layers
**Registry** (`unilabos/registry/`): Singleton `Registry` class discovers and catalogs all device types, resource types, and communication devices from YAML definitions. Device types live in `registry/devices/*.yaml`, resources in `registry/resources/`, comms in `registry/device_comms/`. The registry resolves class paths to actual Python classes via `utils/import_manager.py`.
**Resource Tracking** (`unilabos/resources/resource_tracker.py`): Pydantic-based `ResourceDict` → `ResourceDictInstance` → `ResourceTreeSet` hierarchy. `ResourceTreeSet` is the canonical in-memory representation of all devices and resources, used throughout the system. Graph I/O is in `resources/graphio.py` (reads JSON/GraphML device topology files into `nx.Graph` + `ResourceTreeSet`).
**Device Drivers** (`unilabos/devices/`): 30+ hardware drivers organized by device type (liquid_handling, hplc, balance, arm, etc.). Each driver is a Python class that gets wrapped by `ros/device_node_wrapper.py:ros2_device_node()` to become a ROS2 node with publishers, subscribers, and action servers.
**ROS2 Layer** (`unilabos/ros/`): `device_node_wrapper.py` dynamically wraps any device class into `ROS2DeviceNode` (defined in `ros/nodes/base_device_node.py`). Preset node types in `ros/nodes/presets/` include `host_node`, `controller_node`, `workstation`, `serial_node`, `camera`. Messages use custom `unilabos_msgs` (pre-built, distributed via releases).
**Protocol Compilation** (`unilabos/compile/`): 20+ protocol compilers (add, centrifuge, dissolve, filter, heatchill, stir, pump, etc.) that transform YAML protocol definitions into executable sequences.
**Communication** (`unilabos/device_comms/`): Hardware communication adapters — OPC-UA client, Modbus PLC, RPC, and a universal driver. `app/communication.py` provides a factory pattern for WebSocket client connections to the cloud.
**Web/API** (`unilabos/app/web/`): FastAPI server with REST API (`api.py`), Jinja2 template pages (`pages.py`), and HTTP client for cloud communication (`client.py`). Runs on port 8002 by default.
### Configuration System
- **Config classes** in `unilabos/config/config.py`: `BasicConfig`, `WSConfig`, `HTTPConfig`, `ROSConfig` — all class-level attributes, loaded from Python config files
- Config files are `.py` files with matching class names (see `config/example_config.py`)
- Environment variables override with prefix `UNILABOS_` (e.g., `UNILABOS_BASICCONFIG_PORT=9000`)
- Device topology defined in graph files (JSON with node-link format, or GraphML)
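The environment-variable override convention can be sketched as follows (an illustrative stand-in — the real loader in `config/config.py` may differ in naming and type handling):

```python
import os

class BasicConfig:
    # Class-level attributes, as in unilabos/config/config.py
    port: int = 8002
    host: str = "127.0.0.1"

def apply_env_overrides(cls, environ=None):
    """Apply UNILABOS_<CLASSNAME>_<FIELD> overrides onto class attributes."""
    environ = os.environ if environ is None else environ
    prefix = f"UNILABOS_{cls.__name__.upper()}_"
    for key, raw in environ.items():
        if key.startswith(prefix):
            field = key[len(prefix):].lower()
            if hasattr(cls, field):
                current = getattr(cls, field)
                # Cast the raw string to the attribute's existing type
                setattr(cls, field, type(current)(raw))

apply_env_overrides(BasicConfig, {"UNILABOS_BASICCONFIG_PORT": "9000"})
print(BasicConfig.port)  # 9000
```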
### Key Data Flow
1. Graph file → `graphio.read_node_link_json()` → `(nx.Graph, ResourceTreeSet, resource_links)`
2. `ResourceTreeSet` + `Registry` → `initialize_device.initialize_device_from_dict()` → `ROS2DeviceNode` instances
3. Device nodes communicate via ROS2 topics/actions or direct Python calls (simple backend)
4. Cloud sync via WebSocket (`app/ws_client.py`) and HTTP (`app/web/client.py`)
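The first two steps can be sketched in stub form (all function bodies below are hypothetical stand-ins, not the real `graphio`/`initialize_device` implementations):

```python
# Hypothetical stand-ins for the graphio / Registry / initialize_device APIs —
# shapes only, not the real signatures.
def read_node_link_json(path):
    graph = {"nodes": ["pump_1"], "edges": []}    # stands in for nx.Graph
    resource_tree = {"pump_1": {"id": "pump_1"}}  # stands in for ResourceTreeSet
    resource_links = []
    return graph, resource_tree, resource_links

def initialize_device_from_dict(resource_tree, registry):
    # Each entry's class path would be resolved via the registry, then
    # wrapped into a ROS2DeviceNode (or a plain object on the simple backend).
    return [f"ROS2DeviceNode<{rid}>" for rid in resource_tree]

graph, tree, links = read_node_link_json("graph.json")
nodes = initialize_device_from_dict(tree, registry={})
print(nodes)
```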
### Test Data
Example device graphs and experiment configs are in `unilabos/test/experiments/` (not `tests/`). Registry test fixtures in `unilabos/test/registry/`.
## Code Conventions
- Code comments and log messages in simplified Chinese
- Python 3.11+, type hints expected
- Pydantic models for data validation (`resource_tracker.py`)
- Singleton pattern via `@singleton` decorator (`utils/decorator.py`)
- Dynamic class loading via `utils/import_manager.py` — device classes resolved at runtime from registry YAML paths
- CLI argument dashes auto-converted to underscores for consistency
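For illustration, a minimal `@singleton` decorator in the spirit of `utils/decorator.py` might look like this (a sketch — the real implementation may differ):

```python
def singleton(cls):
    """Cache the first instance and return it on every later call."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance

@singleton
class Registry:
    def __init__(self):
        self.device_types = {}

assert Registry() is Registry()  # one shared instance process-wide
```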
## Licensing
- Framework code: GPL-3.0
- Device drivers (`unilabos/devices/`): DP Technology Proprietary License — do not redistribute


@@ -1,4 +0,0 @@
Please follow the rules defined in:
@AGENTS.md


@@ -1,5 +1,4 @@
recursive-include unilabos/test *
recursive-include unilabos/utils *
recursive-include unilabos/registry *.yaml
recursive-include unilabos/app/web/static *
recursive-include unilabos/app/web/templates *

NOTICE

@@ -1,17 +0,0 @@
# Uni-Lab-OS Licensing Notice
This project uses a dual licensing structure:
## 1. Main Framework - GPL-3.0
- unilabos/ (except unilabos/devices/)
- docs/
- tests/
See [LICENSE](LICENSE) for details.
## 2. Device Drivers - DP Technology Proprietary License
- unilabos/devices/
See [unilabos/devices/LICENSE](unilabos/devices/LICENSE) for details.


@@ -8,13 +8,17 @@
**English** | [中文](README_zh.md)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/blob/main/LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/blob/main/LICENSE)
Uni-Lab-OS is a platform for laboratory automation, designed to connect and control various experimental equipment, enabling automation and standardization of experimental workflows.
## 🏆 Competition
Join the [Intelligent Organic Chemistry Synthesis Competition](https://bohrium.dp.tech/competitions/1451645258) to explore automated synthesis with Uni-Lab-OS!
## Key Features
- Multi-device integration management
@@ -27,89 +31,41 @@ Uni-Lab-OS is a platform for laboratory automation, designed to connect and cont
Detailed documentation can be found at:
- [Online Documentation](https://deepmodeling.github.io/Uni-Lab-OS/)
- [Online Documentation](https://xuwznln.github.io/Uni-Lab-OS-Doc/)
## Quick Start
### 1. Setup Conda Environment
Uni-Lab-OS recommends using `mamba` for environment management. Choose the package that fits your needs:
| Package | Use Case | Contents |
|---------|----------|----------|
| `unilabos` | **Recommended for most users** | Complete package, ready to use |
| `unilabos-env` | Developers (editable install) | Environment only, install unilabos via pip |
| `unilabos-full` | Simulation/Visualization | unilabos + ROS2 Desktop + Gazebo + MoveIt |
Uni-Lab-OS recommends using `mamba` for environment management. Choose the appropriate environment file for your operating system:
```bash
# Create new environment
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
mamba activate unilab
# Option A: Standard installation (recommended for most users)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# Option B: For developers (editable mode development)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# Then install unilabos and dependencies:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# Option C: Full installation (simulation/visualization)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**When to use which?**
- **unilabos**: Standard installation for production deployment and general usage (recommended)
- **unilabos-env**: For developers who need `pip install -e .` editable mode, modify source code
- **unilabos-full**: For simulation (Gazebo), visualization (rviz2), and Jupyter notebooks
### 2. Clone Repository (Optional, for developers)
## Install Dev Uni-Lab-OS
```bash
# Clone the repository (only needed for development or examples)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
# Clone the repository
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
cd Uni-Lab-OS
# Install Uni-Lab-OS
pip install .
```
3. Start Uni-Lab System
3. Start Uni-Lab System:
Please refer to [Documentation - Boot Examples](https://deepmodeling.github.io/Uni-Lab-OS/boot_examples/index.html)
4. Best Practice
See [Best Practice Guide](https://deepmodeling.github.io/Uni-Lab-OS/user_guide/best_practice.html)
Please refer to [Documentation - Boot Examples](https://xuwznln.github.io/Uni-Lab-OS-Doc/boot_examples/index.html)
## Message Format
Uni-Lab-OS uses pre-built `unilabos_msgs` for system communication. You can find the built versions on the [GitHub Releases](https://github.com/deepmodeling/Uni-Lab-OS/releases) page.
## Citation
If you use [Uni-Lab-OS](https://arxiv.org/abs/2512.21766) in academic research, please cite:
```bibtex
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and
Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and
Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and
Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
```
Uni-Lab-OS uses pre-built `unilabos_msgs` for system communication. You can find the built versions on the [GitHub Releases](https://github.com/dptech-corp/Uni-Lab-OS/releases) page.
## License
This project uses a dual licensing structure:
- **Main Framework**: GPL-3.0 - see [LICENSE](LICENSE)
- **Device Drivers** (`unilabos/devices/`): DP Technology Proprietary License
See [NOTICE](NOTICE) for complete licensing details.
This project is licensed under GPL-3.0 - see the [LICENSE](LICENSE) file for details.
## Project Statistics
@@ -121,4 +77,4 @@ See [NOTICE](NOTICE) for complete licensing details.
## Contact Us
- GitHub Issues: [https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- GitHub Issues: [https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)


@@ -8,13 +8,17 @@
[English](README.md) | **中文**
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/blob/main/LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/blob/main/LICENSE)
Uni-Lab-OS 是一个用于实验室自动化的综合平台,旨在连接和控制各种实验设备,实现实验流程的自动化和标准化。
## 🏆 比赛
欢迎参加[有机化学合成智能实验大赛](https://bohrium.dp.tech/competitions/1451645258),使用 Uni-Lab-OS 探索自动化合成!
## 核心特点
- 多设备集成管理
@@ -27,89 +31,43 @@ Uni-Lab-OS 是一个用于实验室自动化的综合平台,旨在连接和控
详细文档可在以下位置找到:
- [在线文档](https://deepmodeling.github.io/Uni-Lab-OS/)
- [在线文档](https://xuwznln.github.io/Uni-Lab-OS-Doc/)
## 快速开始
### 1. 配置 Conda 环境
1. 配置 Conda 环境
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的需求选择合适的安装包:
| 安装包 | 适用场景 | 包含内容 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户** | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + ROS2 桌面版 + Gazebo + MoveIt |
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的操作系统选择适当的环境文件:
```bash
# 创建新环境
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
mamba activate unilab
# 方案 A:标准安装(推荐大多数用户)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C:完整安装(仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**如何选择?**
- **unilabos**:标准安装,适用于生产部署和日常使用(推荐)
- **unilabos-env**:开发者使用,支持 `pip install -e .` 可编辑模式,可修改源代码
- **unilabos-full**:需要仿真(Gazebo)、可视化(rviz2)或 Jupyter Notebook
### 2. 克隆仓库(可选,供开发者使用)
2. 安装开发版 Uni-Lab-OS:
```bash
# 克隆仓库(仅开发或查看示例时需要)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
# 克隆仓库
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
cd Uni-Lab-OS
# 安装 Uni-Lab-OS
pip install .
```
3. 启动 Uni-Lab 系统
3. 启动 Uni-Lab 系统:
请见[文档-启动样例](https://deepmodeling.github.io/Uni-Lab-OS/boot_examples/index.html)
4. 最佳实践
请见[最佳实践指南](https://deepmodeling.github.io/Uni-Lab-OS/user_guide/best_practice.html)
请见[文档-启动样例](https://xuwznln.github.io/Uni-Lab-OS-Doc/boot_examples/index.html)
## 消息格式
Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在 [GitHub Releases](https://github.com/deepmodeling/Uni-Lab-OS/releases) 页面找到已构建的版本。
## 引用
如果您在学术研究中使用 [Uni-Lab-OS](https://arxiv.org/abs/2512.21766),请引用:
```bibtex
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and
Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and
Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and
Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
```
Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在 [GitHub Releases](https://github.com/dptech-corp/Uni-Lab-OS/releases) 页面找到已构建的版本。
## 许可证
项目采用双许可证结构:
- **主框架**:GPL-3.0 - 详见 [LICENSE](LICENSE)
- **设备驱动** (`unilabos/devices/`):深势科技专有许可证
完整许可证说明请参阅 [NOTICE](NOTICE)。
项目采用 GPL-3.0 许可 - 详情请参阅 [LICENSE](LICENSE) 文件。
## 项目统计
@@ -121,4 +79,4 @@ Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在
## 联系我们
- GitHub Issues: [https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- GitHub Issues: [https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)


@@ -24,7 +24,7 @@ extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.napoleon", # 如果您使用 Google 或 NumPy 风格的 docstrings
"sphinx_rtd_theme",
"sphinxcontrib.mermaid",
"sphinxcontrib.mermaid"
]
source_suffix = {
@@ -58,7 +58,7 @@ html_theme = "sphinx_rtd_theme"
# sphinx-book-theme 主题选项
html_theme_options = {
"repository_url": "https://github.com/deepmodeling/Uni-Lab-OS",
"repository_url": "https://github.com/用户名/Uni-Lab",
"use_repository_button": True,
"use_issues_button": True,
"use_edit_page_button": True,

File diff suppressed because it is too large


@@ -15,9 +15,6 @@ Python 类设备驱动在完成注册表后可以直接在 Uni-Lab 中使用,
**示例:**
```python
from unilabos.registry.decorators import device, topic_config
@device(id="mock_gripper", category=["gripper"], description="Mock Gripper")
class MockGripper:
def __init__(self):
self._position: float = 0.0
@@ -26,23 +23,19 @@ class MockGripper:
self._status = "Idle"
@property
@topic_config() # 添加 @topic_config 才会定时广播
def position(self) -> float:
return self._position
@property
@topic_config()
def velocity(self) -> float:
return self._velocity
@property
@topic_config()
def torque(self) -> float:
return self._torque
# 使用 @topic_config 装饰的属性,接入 Uni-Lab 时会定时对外广播
# 会被自动识别的设备属性,接入 Uni-Lab 时会定时对外广播
@property
@topic_config(period=2.0) # 可自定义发布周期
def status(self) -> str:
return self._status
@@ -156,7 +149,7 @@ my_device: # 设备唯一标识符
系统会自动分析您的 Python 驱动类并生成:
- `status_types`:从 `@topic_config` 装饰的 `@property` 方法自动识别状态属性
- `status_types`:从 `@property` 装饰的方法自动识别状态属性
- `action_value_mappings`:从类方法自动生成动作映射
- `init_param_schema`:从 `__init__` 方法分析初始化参数
- `schema`:前端显示用的属性类型定义
@@ -186,9 +179,7 @@ Uni-Lab 设备驱动是一个 Python 类,需要遵循以下结构:
```python
from typing import Dict, Any
from unilabos.registry.decorators import device, topic_config
@device(id="my_device", category=["general"], description="My Device")
class MyDevice:
"""设备类文档字符串
@@ -207,9 +198,8 @@ class MyDevice:
# 初始化硬件连接
@property
@topic_config() # 必须添加 @topic_config 才会广播
def status(self) -> str:
"""设备状态(通过 @topic_config 广播)"""
"""设备状态(会自动广播)"""
return self._status
def my_action(self, param: float) -> Dict[str, Any]:
@@ -227,61 +217,34 @@ class MyDevice:
## 状态属性 vs 动作方法
### 状态属性(@property + @topic_config)
### 状态属性(@property)
状态属性需要同时使用 `@property` 和 `@topic_config` 装饰器才会被识别并定期广播:
状态属性会被自动识别并定期广播:
```python
from unilabos.registry.decorators import topic_config
@property
@topic_config() # 必须添加,否则不会广播
def temperature(self) -> float:
"""当前温度"""
return self._read_temperature()
@property
@topic_config(period=2.0) # 可自定义发布周期(秒)
def status(self) -> str:
"""设备状态: idle, running, error"""
return self._status
@property
@topic_config(name="ready") # 可自定义发布名称
def is_ready(self) -> bool:
"""设备是否就绪"""
return self._status == "idle"
```
也可以使用普通方法(非 @property)配合 `@topic_config`
```python
@topic_config(period=10.0)
def get_sensor_data(self) -> Dict[str, float]:
"""获取传感器数据get_ 前缀会自动去除,发布名为 sensor_data"""
return {"temp": self._temp, "humidity": self._humidity}
```
**`@topic_config` 参数**:
| 参数 | 类型 | 默认值 | 说明 |
|------|------|--------|------|
| `period` | float | 5.0 | 发布周期(秒) |
| `print_publish` | bool | 节点默认 | 是否打印发布日志 |
| `qos` | int | 10 | QoS 深度 |
| `name` | str | None | 自定义发布名称 |
**发布名称优先级**`@topic_config(name=...)` > `get_` 前缀去除 > 方法名
**特点**:
- 必须使用 `@topic_config` 装饰器
- 支持 `@property` 和普通方法
- 添加到注册表的 `status_types`
- 使用`@property`装饰器
- 只读,不能有参数
- 自动添加到注册表的`status_types`
- 定期发布到 ROS2 topic
> **⚠️ 重要:** 仅有 `@property` 装饰器而没有 `@topic_config` 的属性**不会**被广播。这是一个 Breaking Change。
### 动作方法
动作方法是设备可以执行的操作:
@@ -534,7 +497,6 @@ class LiquidHandler:
self._status = "idle"
@property
@topic_config()
def status(self) -> str:
return self._status
@@ -924,52 +886,7 @@ class MyDevice:
## 最佳实践
### 1. 使用 `@device` 装饰器标识设备
```python
from unilabos.registry.decorators import device
@device(id="my_device", category=["heating"], description="My Heating Device", icon="heater.webp")
class MyDevice:
...
```
- `id`:设备唯一标识符,用于注册表匹配
- `category`:分类列表,前端用于分组显示
- `description`:设备描述
- `icon`:图标文件名(可选)
### 2. 使用 `@topic_config` 声明需要广播的状态
```python
from unilabos.registry.decorators import topic_config
# ✓ @property + @topic_config → 会广播
@property
@topic_config(period=2.0)
def temperature(self) -> float:
return self._temp
# ✓ 普通方法 + @topic_config → 会广播(get_ 前缀自动去除)
@topic_config(period=10.0)
def get_sensor_data(self) -> Dict[str, float]:
return {"temp": self._temp}
# ✓ 使用 name 参数自定义发布名称
@property
@topic_config(name="ready")
def is_ready(self) -> bool:
return self._status == "idle"
# ✗ 仅有 @property没有 @topic_config → 不会广播
@property
def internal_state(self) -> str:
return self._state
```
> **注意:** 与 `@property` 连用时,`@topic_config` 必须放在 `@property` 下面。
### 3. 类型注解
### 1. 类型注解
```python
from typing import Dict, Any, Optional, List
@@ -984,7 +901,7 @@ def method(
pass
```
### 4. 文档字符串
### 2. 文档字符串
```python
def method(self, param: float) -> Dict[str, Any]:
@@ -1006,7 +923,7 @@ def method(self, param: float) -> Dict[str, Any]:
pass
```
### 5. 配置验证
### 3. 配置验证
```python
def __init__(self, config: Dict[str, Any]):
@@ -1020,7 +937,7 @@ def __init__(self, config: Dict[str, Any]):
self.baudrate = config['baudrate']
```
### 6. 资源清理
### 4. 资源清理
```python
def __del__(self):
@@ -1029,7 +946,7 @@ def __del__(self):
self.connection.close()
```
### 7. 设计前端友好的返回值
### 5. 设计前端友好的返回值
**记住:返回值会直接显示在 Web 界面**


@@ -422,20 +422,18 @@ placeholder_keys:
### status_types
系统会扫描你的 Python 类,从带有 `@topic_config` 装饰器的 `@property` 方法自动生成这部分:
系统会扫描你的 Python 类,从状态方法(property 或 get\_ 方法)自动生成这部分:
```yaml
status_types:
current_temperature: float # 从 @topic_config 装饰的 @property 或方法
is_heating: bool
status: str
current_temperature: float # 从 get_current_temperature() 或 @property current_temperature
is_heating: bool # 从 get_is_heating() 或 @property is_heating
status: str # 从 get_status() 或 @property status
```
**注意事项**
- 仅有带 `@topic_config` 装饰器的 `@property` 或方法才会被识别为状态属性
- 没有 `@topic_config` 的 `@property` 不会生成 status_types,也不会广播
- `get_` 前缀的方法名会自动去除前缀(如 `get_temperature` → `temperature`)
- 系统会查找所有 `get_` 开头的方法和 `@property` 装饰的属性
- 类型会自动转成相应的类型(如 `str``float``bool`
- 如果类型是 `Any``None` 或未知的,默认使用 `String`
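类型到状态类型名的回退逻辑大致如下(示意代码;映射表中的类型名为假设,实际名称以注册表实现为准):

```python
from typing import Any

def map_status_type(annotation: Any) -> str:
    """将 Python 类型映射为状态类型名;Any/None/未知类型回退为 String(示意实现,类型名为假设)"""
    mapping = {str: "String", float: "Float64", bool: "Bool", int: "Int64"}
    return mapping.get(annotation, "String")

assert map_status_type(float) == "Float64"
assert map_status_type(Any) == "String"  # Any / None / 未知类型回退为 String
```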
@@ -539,13 +537,11 @@ class AdvancedLiquidHandler:
self._temperature = 25.0
@property
@topic_config()
def status(self) -> str:
"""设备状态"""
return self._status
@property
@topic_config()
def temperature(self) -> float:
"""当前温度"""
return self._temperature
@@ -813,23 +809,21 @@ my_temperature_controller:
你的设备类需要符合以下要求:
```python
from unilabos.registry.decorators import device, topic_config
from unilabos.common.device_base import DeviceBase
@device(id="my_device", category=["temperature"], description="My Device")
class MyDevice:
class MyDevice(DeviceBase):
def __init__(self, config):
"""初始化,参数会自动分析到 init_param_schema.config"""
super().__init__(config)
self.port = config.get('port', '/dev/ttyUSB0')
# 状态方法(必须添加 @topic_config 才会生成到 status_types 并广播)
# 状态方法(会自动生成到 status_types)
@property
@topic_config()
def status(self):
"""返回设备状态"""
return "idle"
@property
@topic_config()
def temperature(self):
"""返回当前温度"""
return 25.0
@@ -1045,34 +1039,7 @@ resource.type # "resource"
### 代码规范
1. **使用 `@device` 装饰器标识设备类**
```python
from unilabos.registry.decorators import device
@device(id="my_device", category=["heating"], description="My Device")
class MyDevice:
...
```
2. **使用 `@topic_config` 声明广播属性**
```python
from unilabos.registry.decorators import topic_config
# ✓ 需要广播的状态属性
@property
@topic_config(period=2.0)
def temperature(self) -> float:
return self._temp
# ✗ 仅有 @property 不会广播
@property
def internal_counter(self) -> int:
return self._counter
```
3. **始终使用类型注解**
1. **始终使用类型注解**
```python
# ✓ 好
@@ -1084,7 +1051,7 @@ def method(self, resource, device):
pass
```
4. **提供有意义的参数名**
2. **提供有意义的参数名**
```python
# ✓ 好 - 清晰的参数名
@@ -1096,7 +1063,7 @@ def transfer(self, r1: ResourceSlot, r2: ResourceSlot):
pass
```
5. **使用 Optional 表示可选参数**
3. **使用 Optional 表示可选参数**
```python
from typing import Optional
@@ -1109,7 +1076,7 @@ def method(
pass
```
6. **添加详细的文档字符串**
4. **添加详细的文档字符串**
```python
def method(
@@ -1129,13 +1096,13 @@ def method(
pass
```
7. **方法命名规范**
5. **方法命名规范**
- 状态方法使用 `@property` + `@topic_config` 装饰器,或普通方法 + `@topic_config`
- 状态方法使用 `@property` 装饰器或 `get_` 前缀
- 动作方法使用动词开头
- 保持命名清晰、一致
8. **完善的错误处理**
6. **完善的错误处理**
- 实现完善的错误处理
- 添加日志记录
- 提供有意义的错误信息


@@ -221,10 +221,10 @@ Laboratory A Laboratory B
```bash
# 实验室A
unilab --ak your_ak --sk your_sk --upload_registry
unilab --ak your_ak --sk your_sk --upload_registry --use_remote_resource
# 实验室B
unilab --ak your_ak --sk your_sk --upload_registry
unilab --ak your_ak --sk your_sk --upload_registry --use_remote_resource
```
---


@@ -12,7 +12,3 @@ sphinx-copybutton>=0.5.0
# 用于自动摘要生成
sphinx-autobuild>=2024.2.4
# 用于PDF导出 (rinohtype方案纯Python无需LaTeX)
rinohtype>=0.5.4
sphinx-simplepdf>=1.6.0


@@ -31,14 +31,6 @@
详细的安装步骤请参考 [安装指南](installation.md)。
**选择合适的安装包:**
| 安装包 | 适用场景 | 包含组件 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt |
**关键步骤:**
```bash
@@ -46,30 +38,15 @@
# 下载 Miniforge: https://github.com/conda-forge/miniforge/releases
# 2. 创建 Conda 环境
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
# 3. 激活环境
mamba activate unilab
# 4. 安装 Uni-Lab-OS(选择其一)
# 方案 A:标准安装(推荐大多数用户)
# 4. 安装 Uni-Lab-OS
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
pip install -e /path/to/Uni-Lab-OS # 可编辑安装
uv pip install -r unilabos/utils/requirements.txt # 安装 pip 依赖
# 方案 C:完整版(仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
**选择建议:**
- **日常使用/生产部署**:使用 `unilabos`(推荐),完整功能,开箱即用
- **开发者**:使用 `unilabos-env` + `pip install -e .` + `uv pip install -r unilabos/utils/requirements.txt`,代码修改立即生效
- **仿真/可视化**:使用 `unilabos-full`,含 Gazebo、rviz2、MoveIt
#### 1.2 验证安装
```bash
@@ -439,9 +416,6 @@ unilab --ak your_ak --sk your_sk -g test/experiments/mock_devices/mock_all.json
1. 访问 Web 界面,进入"仪器耗材"模块
2. 在"仪器设备"区域找到并添加上述设备
3. 在"物料耗材"区域找到并添加容器
4. 在 workstation 中配置 protocol_type(包含 PumpTransferProtocol)
![添加Protocol类型](image/add_protocol.png)
![物料列表](image/material.png)
@@ -452,9 +426,8 @@ unilab --ak your_ak --sk your_sk -g test/experiments/mock_devices/mock_all.json
**操作步骤:**
1. 将两个 `container` 拖拽到 `workstation`
2. 将 `virtual_multiway_valve` 拖拽到 `workstation` 内
3. 将 `virtual_transfer_pump` 拖拽到 `workstation` 内
4. 在画布上连接它们(建立父子关系)
2. 将 `virtual_transfer_pump` 拖拽到 `workstation` 内
3. 在画布上连接它们(建立父子关系)
![设备连接](image/links.png)
@@ -795,43 +768,7 @@ Waiting for host service...
详细的设备驱动编写指南请参考 [添加设备驱动](../developer_guide/add_device.md)。
#### 9.1 开发环境准备
**推荐使用 `unilabos-env` + `pip install -e .` + `uv pip install`** 进行设备开发:
```bash
# 1. 创建环境并安装 unilabos-env(ROS2 + conda 依赖 + uv)
mamba create -n unilab python=3.11.14
conda activate unilab
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 2. 克隆代码
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 3. 以可编辑模式安装(推荐使用脚本,自动检测中文环境)
python scripts/dev_install.py
# 或手动安装:
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
**为什么使用这种方式?**
- `unilabos-env` 提供 ROS2 核心组件和 uv(通过 conda 安装,避免编译)
- `unilabos/utils/requirements.txt` 包含所有运行时需要的 pip 依赖
- `dev_install.py` 自动检测中文环境,中文系统自动使用清华镜像
- 使用 `uv` 替代 `pip`,安装速度更快
- 可编辑模式:代码修改**立即生效**,无需重新安装
**如果安装失败或速度太慢**,可以手动执行(使用清华镜像):
```bash
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
#### 9.2 为什么需要自定义设备?
#### 9.1 为什么需要自定义设备?
Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要集成:
@@ -840,7 +777,7 @@ Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要
- 特殊的实验流程
- 第三方设备集成
#### 9.3 创建 Python 包
#### 9.2 创建 Python 包
为了方便开发和管理,建议为您的实验室创建独立的 Python 包。
@@ -877,7 +814,7 @@ touch my_lab_devices/my_lab_devices/__init__.py
touch my_lab_devices/my_lab_devices/devices/__init__.py
```
#### 9.4 创建 setup.py
#### 9.3 创建 setup.py
```python
# my_lab_devices/setup.py
@@ -908,7 +845,7 @@ setup(
)
```
#### 9.5 开发安装
#### 9.4 开发安装
使用 `-e` 参数进行可编辑安装,这样代码修改后立即生效:
@@ -923,7 +860,7 @@ pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
- 方便调试和测试
- 支持版本控制git
#### 9.6 编写设备驱动
#### 9.5 编写设备驱动
创建设备驱动文件:
@@ -1064,7 +1001,7 @@ class MyPump:
- **返回 Dict**:所有动作方法返回字典类型
- **文档字符串**:详细说明参数和功能
#### 9.7 测试设备驱动
#### 9.6 测试设备驱动
创建简单的测试脚本:
@@ -1870,7 +1807,7 @@ unilab --ak your_ak --sk your_sk -g graph.json \
#### 14.5 社区支持
- **GitHub Issues**[https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- **GitHub Issues**[https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- **官方网站**[https://uni-lab.bohrium.com](https://uni-lab.bohrium.com)
---


@@ -463,7 +463,7 @@ Uni-Lab 使用 `ResourceDictInstance.get_resource_instance_from_dict()` 方法
### 使用示例
```python
from unilabos.resources.resource_tracker import ResourceDictInstance
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance
# 旧格式节点
old_format_node = {
@@ -477,10 +477,10 @@ old_format_node = {
instance = ResourceDictInstance.get_resource_instance_from_dict(old_format_node)
# 访问标准化后的数据
print(instance.res_content.id) # "pump_1"
print(instance.res_content.uuid) # 自动生成的 UUID
print(instance.res_content.id) # "pump_1"
print(instance.res_content.uuid) # 自动生成的 UUID
print(instance.res_content.config) # {}
print(instance.res_content.data) # {}
print(instance.res_content.data) # {}
```
### 格式迁移建议
@@ -857,4 +857,4 @@ class ResourceDictPosition(BaseModel):
- 在 Web 界面中使用模板创建
- 参考示例文件:`test/experiments/` 目录
- 查看 ResourceDict 源码了解完整定义
- [GitHub 讨论区](https://github.com/deepmodeling/Uni-Lab-OS/discussions)
- [GitHub 讨论区](https://github.com/dptech-corp/Uni-Lab-OS/discussions)

(Binary image diffs not shown: one image removed (81 KiB); one image replaced (415 KiB → 275 KiB).)


@@ -13,26 +13,15 @@
- 开发者需要 Git 和基本的 Python 开发知识
- 自定义 msgs 需要 GitHub 账号
## 安装包选择
Uni-Lab-OS 提供三个安装包版本,根据您的需求选择:
| 安装包 | 适用场景 | 包含组件 | 磁盘占用 |
|--------|----------|----------|----------|
| **unilabos** | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 | ~2-3 GB |
| **unilabos-env** | 开发者环境(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos | ~2 GB |
| **unilabos-full** | 仿真/可视化、完整功能体验 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt | ~8-10 GB |
## 安装方式选择
根据您的使用场景,选择合适的安装方式:
| 安装方式 | 适用人群 | 推荐安装包 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ----------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 快速体验、演示 | 预打包环境 | 离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | **大多数用户** | `unilabos` | 完整功能,开箱即用 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | `unilabos-env` | 可编辑模式,支持自定义开发 | 20-30 分钟 |
| **仿真/可视化** | 仿真测试、可视化调试 | `unilabos-full` | 含 Gazebo、rviz2、MoveIt | 30-60 分钟 |
| 安装方式 | 适用人群 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 实验室用户、快速体验 | 预打包环境,离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | 标准用户、生产环境 | 灵活配置,版本可控 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | 可编辑模式,支持自定义 msgs | 20-30 分钟 |
---
@@ -48,7 +37,7 @@ Uni-Lab-OS 提供三个安装包版本,根据您的需求选择:
#### 第一步:下载预打包环境
1. 访问 [GitHub Actions - Conda Pack Build](https://github.com/deepmodeling/Uni-Lab-OS/actions/workflows/conda-pack-build.yml)
1. 访问 [GitHub Actions - Conda Pack Build](https://github.com/dptech-corp/Uni-Lab-OS/actions/workflows/conda-pack-build.yml)
2. 选择最新的成功构建记录(绿色勾号 ✓)
@@ -155,38 +144,17 @@ bash Miniforge3-$(uname)-$(uname -m).sh
使用以下命令创建 Uni-Lab 专用环境:
```bash
mamba create -n unilab python=3.11.14 # 目前 ros2 组件依赖版本大多为 3.11.14
mamba create -n unilab python=3.11.11 # 目前 ros2 组件依赖版本大多为 3.11.11
mamba activate unilab
# 选择安装包(三选一):
# 方案 A:标准安装(推荐大多数用户)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和 pip 依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C:完整版(含仿真和可视化工具)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**参数说明**:
- `-n unilab`: 创建名为 "unilab" 的环境
- `uni-lab::unilabos`: 安装 unilabos 完整包,开箱即用(推荐)
- `uni-lab::unilabos-env`: 仅安装环境依赖,适合开发者使用 `pip install -e .`
- `uni-lab::unilabos-full`: 安装完整包(含 ROS2 Desktop、Gazebo、MoveIt 等)
- `uni-lab::unilabos`: 从 uni-lab channel 安装 unilabos 包
- `-c robostack-staging -c conda-forge`: 添加额外的软件源
**包选择建议**
- **日常使用/生产部署**:安装 `unilabos`(推荐,完整功能,开箱即用)
- **开发者**:安装 `unilabos-env`,然后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖,再 `pip install -e .` 进行可编辑安装
- **仿真/可视化**:安装 `unilabos-full`(含 Gazebo、rviz2、MoveIt)
**如果遇到网络问题**,可以使用清华镜像源加速下载:
```bash
@@ -195,14 +163,8 @@ mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/m
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
# 然后重新执行安装命令(推荐标准安装)
# 然后重新执行安装命令
mamba create -n unilab uni-lab::unilabos -c robostack-staging
# 或完整版(仿真/可视化)
mamba create -n unilab uni-lab::unilabos-full -c robostack-staging
# pip 安装时使用清华镜像(开发者安装时使用)
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
### 第三步:激活环境
@@ -227,13 +189,13 @@ conda activate unilab
### 第一步:克隆仓库
```bash
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
cd Uni-Lab-OS
```
如果您需要贡献代码,建议先 Fork 仓库:
1. 访问 https://github.com/deepmodeling/Uni-Lab-OS
1. 访问 https://github.com/dptech-corp/Uni-Lab-OS
2. 点击右上角的 "Fork" 按钮
3. Clone 您的 Fork 版本:
```bash
@@ -241,87 +203,58 @@ cd Uni-Lab-OS
cd Uni-Lab-OS
```
### 第二步:安装开发环境(unilabos-env)
### 第二步:安装基础环境
**重要**:开发者请使用 `unilabos-env` 包,它专为开发者设计:
- 包含 ROS2 核心组件和消息包(ros-humble-ros-core、std-msgs、geometry-msgs 等)
- 包含 transforms3d、cv-bridge、tf2 等 conda 依赖
- 包含 `uv` 工具,用于快速安装 pip 依赖
- **不包含** pip 依赖和 unilabos 包(由 `pip install -e .` 和 `uv pip install` 安装)
**推荐方式**:先通过**方式一(一键安装)**或**方式二(手动安装)**完成基础环境的安装,这将包含所有必需的依赖项(ROS2、msgs 等)。
#### 选项 A:通过一键安装(推荐)
参考上文"方式一:一键安装",完成基础环境的安装后,激活环境:
```bash
# 创建并激活环境
mamba create -n unilab python=3.11.14
conda activate unilab
# 安装开发者环境包(ROS2 + conda 依赖 + uv)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
```
### 第三步:安装 pip 依赖和可编辑模式安装
#### 选项 B:通过手动安装
克隆代码并安装依赖
参考上文"方式二:手动安装",创建并安装环境
```bash
mamba create -n unilab python=3.11.11
conda activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**说明**:这会安装 Python 3.11.11、ROS2 Humble、ros-humble-unilabos-msgs 以及所有必需依赖。
### 第三步:切换到开发版本
现在你已经有了一个完整可用的 Uni-Lab 环境,接下来将 unilabos 包切换为开发版本:
```bash
# 确保环境已激活
conda activate unilab
# 克隆仓库(如果还未克隆)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 卸载 pip 安装的 unilabos(保留所有 conda 依赖)
pip uninstall unilabos -y
# 切换到 dev 分支(可选)
# 克隆 dev 分支(如果还未克隆)
cd /path/to/your/workspace
git clone -b dev https://github.com/dptech-corp/Uni-Lab-OS.git
# 或者如果已经克隆,切换到 dev 分支
cd Uni-Lab-OS
git checkout dev
git pull
```
**推荐:使用安装脚本**(自动检测中文环境,使用 uv 加速):
```bash
# 自动检测中文环境,如果是中文系统则使用清华镜像
python scripts/dev_install.py
# 或者手动指定:
python scripts/dev_install.py --china # 强制使用清华镜像
python scripts/dev_install.py --no-mirror # 强制使用 PyPI
python scripts/dev_install.py --skip-deps # 跳过 pip 依赖安装
python scripts/dev_install.py --use-pip # 使用 pip 而非 uv
```
**手动安装**(如果脚本安装失败或速度太慢):
```bash
# 1. 安装 unilabos可编辑模式
pip install -e .
# 2. 使用 uv 安装 pip 依赖(推荐,速度更快)
uv pip install -r unilabos/utils/requirements.txt
# 国内用户使用清华镜像:
# 以可编辑模式安装开发版 unilabos
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
**注意**
- `uv` 已包含在 `unilabos-env` 中,无需单独安装
- `unilabos/utils/requirements.txt` 包含运行 unilabos 所需的所有 pip 依赖
- 部分特殊包(如 pylabrobot)会在运行时由 unilabos 自动检测并安装
**参数说明**
**为什么使用可编辑模式?**
- `-e` (editable mode):代码修改**立即生效**,无需重新安装
- 适合开发调试:修改代码后直接运行测试
- 与 `unilabos-env` 配合:环境依赖由 conda 管理unilabos 代码由 pip 管理
**验证安装**
```bash
# 检查 unilabos 版本
python -c "import unilabos; print(unilabos.__version__)"
# 检查安装位置(应该指向你的代码目录)
pip show unilabos | grep Location
```
- `-e`: editable mode(可编辑模式),代码修改立即生效,无需重新安装
- `-i`: 使用清华镜像源加速下载
- `pip uninstall unilabos`: 只卸载 pip 安装的 unilabos 包,不影响 conda 安装的其他依赖(如 ROS2、msgs 等)
### 第四步:安装或自定义 ros-humble-unilabos-msgs可选
@@ -531,45 +464,7 @@ cd $CONDA_PREFIX/envs/unilab
### 问题 8: 环境很大,有办法减小吗?
**解决方案**:
1. **使用 `unilabos` 标准版**(推荐大多数用户):
```bash
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
```
标准版包含完整功能,环境大小约 2-3GB(相比完整版的 8-10GB)。
2. **使用 `unilabos-env` 开发者版**(最小化):
```bash
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后手动安装依赖
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
开发者版只包含环境依赖,体积最小,约 2GB。
3. **按需安装额外组件**
如果后续需要特定功能,可以单独安装:
```bash
# 需要 Jupyter
mamba install jupyter jupyros
# 需要可视化
mamba install matplotlib opencv
# 需要仿真(注意:这会安装大量依赖)
mamba install ros-humble-gazebo-ros
```
4. **预打包环境问题**
预打包环境(方式一)包含所有依赖,通常较大(压缩后 2-5GB)。这是为了确保离线安装和完整功能。
**包选择建议**
| 需求 | 推荐包 | 预估大小 |
|------|--------|----------|
| 日常使用/生产部署 | `unilabos` | ~2-3 GB |
| 开发调试(可编辑模式) | `unilabos-env` | ~2 GB |
| 仿真/可视化 | `unilabos-full` | ~8-10 GB |
**解决方案**: 预打包的环境包含所有依赖,通常较大(压缩后 2-5GB)。这是为了确保离线安装和完整功能。如果空间有限,考虑使用方式二(手动安装),只安装需要的组件。
### 问题 9: 如何更新到最新版本?
@@ -608,15 +503,14 @@ mamba update ros-humble-unilabos-msgs -c uni-lab -c robostack-staging -c conda-f
## 需要帮助?
- **故障排查**: 查看更详细的故障排查信息
- **GitHub Issues**: [报告问题](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- **GitHub Issues**: [报告问题](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- **开发者文档**: 查看开发者指南获取更多技术细节
- **社区讨论**: [GitHub Discussions](https://github.com/deepmodeling/Uni-Lab-OS/discussions)
- **社区讨论**: [GitHub Discussions](https://github.com/dptech-corp/Uni-Lab-OS/discussions)
---
**提示**:
- **大多数用户**推荐使用方式二(手动安装)的 `unilabos` 标准版
- **开发者**推荐使用方式三(开发者安装),安装 `unilabos-env` 后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖
- **仿真/可视化**推荐安装 `unilabos-full` 完整版
- **快速体验和演示**推荐使用方式一(一键安装)
- 生产环境推荐使用方式二(手动安装)的稳定版本
- 开发和测试推荐使用方式三(开发者安装)
- 快速体验和演示推荐使用方式一(一键安装)


@@ -22,6 +22,7 @@ options:
--is_slave Run the backend as slave node (without host privileges).
--slave_no_host Skip waiting for host service in slave mode
--upload_registry Upload registry information when starting unilab
--use_remote_resource Use remote resources when starting unilab
--config CONFIG Configuration file path, supports .py format Python config files
--port PORT Port for web service information page
--disable_browser Disable opening information page on startup
@@ -84,7 +85,7 @@ Uni-Lab 的启动过程分为以下几个阶段:
支持两种方式:
- **本地文件**:使用 `-g` 指定图谱文件(支持 JSON 和 GraphML 格式)
- **远程资源**:不指定本地文件即可
- **远程资源**:使用 `--use_remote_resource` 从云端获取
### 7. 注册表构建
@@ -195,7 +196,7 @@ unilab --config path/to/your/config.py
unilab --ak your_ak --sk your_sk -g path/to/graph.json --upload_registry
# 使用远程资源启动
unilab --ak your_ak --sk your_sk
unilab --ak your_ak --sk your_sk --use_remote_resource
# 更新注册表
unilab --ak your_ak --sk your_sk --complete_registry


@@ -1,6 +1,6 @@
package:
name: ros-humble-unilabos-msgs
version: 0.10.19
version: 0.10.13
source:
path: ../../unilabos_msgs
target_directory: src
@@ -17,7 +17,7 @@ build:
- bash $SRC_DIR/build_ament_cmake.sh
about:
repository: https://github.com/deepmodeling/Uni-Lab-OS
repository: https://github.com/dptech-corp/Uni-Lab-OS
license: BSD-3-Clause
description: "ros-humble-unilabos-msgs is a package that provides message definitions for Uni-Lab-OS."
@@ -25,7 +25,7 @@ requirements:
build:
- ${{ compiler('cxx') }}
- ${{ compiler('c') }}
- python ==3.11.14
- python ==3.11.11
- numpy
- if: build_platform != target_platform
then:
@@ -63,14 +63,14 @@ requirements:
- robostack-staging::ros-humble-rosidl-default-generators
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.7
- robostack-staging::ros2-distro-mutex=0.6
run:
- robostack-staging::ros-humble-action-msgs
- robostack-staging::ros-humble-ros-workspace
- robostack-staging::ros-humble-rosidl-default-runtime
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.7
- robostack-staging::ros2-distro-mutex=0.6
- if: osx and x86_64
then:
- __osx >=${{ MACOSX_DEPLOYMENT_TARGET|default('10.14') }}


@@ -1,6 +1,6 @@
package:
name: unilabos
version: "0.10.19"
version: "0.10.13"
source:
path: ../..


@@ -85,7 +85,7 @@ Verification:
-------------
The verify_installation.py script will check:
- Python version (3.11.14)
- Python version (3.11.11)
- ROS2 rclpy installation
- UniLabOS installation and dependencies
@@ -104,7 +104,7 @@ Build Information:
Branch: {branch}
Platform: {platform}
Python: 3.11.14
Python: 3.11.11
Date: {build_date}
Troubleshooting:
@@ -126,7 +126,7 @@ If installation fails:
For more help:
- Documentation: docs/user_guide/installation.md
- Quick Start: QUICK_START_CONDA_PACK.md
- Issues: https://github.com/deepmodeling/Uni-Lab-OS/issues
- Issues: https://github.com/dptech-corp/Uni-Lab-OS/issues
License:
--------
@@ -134,7 +134,7 @@ License:
UniLabOS is licensed under GPL-3.0-only.
See LICENSE file for details.
Repository: https://github.com/deepmodeling/Uni-Lab-OS
Repository: https://github.com/dptech-corp/Uni-Lab-OS
"""
return readme


@@ -1,214 +0,0 @@
#!/usr/bin/env python3
"""
Development installation script for UniLabOS.
Auto-detects Chinese locale and uses appropriate mirror.
Usage:
python scripts/dev_install.py
python scripts/dev_install.py --no-mirror # Force no mirror
python scripts/dev_install.py --china # Force China mirror
python scripts/dev_install.py --skip-deps # Skip pip dependencies installation
Flow:
1. pip install -e . (install unilabos in editable mode)
2. Detect Chinese locale
3. Use uv to install pip dependencies from requirements.txt
4. Special packages (like pylabrobot) are handled by environment_check.py at runtime
"""
import locale
import subprocess
import sys
import argparse
from pathlib import Path
# Tsinghua mirror URL
TSINGHUA_MIRROR = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple"
def is_chinese_locale() -> bool:
"""
Detect if system is in Chinese locale.
Same logic as EnvironmentChecker._is_chinese_locale()
"""
try:
lang = locale.getdefaultlocale()[0]
if lang and ("zh" in lang.lower() or "chinese" in lang.lower()):
return True
except Exception:
pass
return False
def run_command(cmd: list, description: str, retry: int = 2) -> bool:
"""Run command with retry support."""
print(f"[INFO] {description}")
print(f"[CMD] {' '.join(cmd)}")
for attempt in range(retry + 1):
try:
result = subprocess.run(cmd, check=True, timeout=600)
print(f"[OK] {description}")
return True
except subprocess.CalledProcessError as e:
if attempt < retry:
print(f"[WARN] Attempt {attempt + 1} failed, retrying...")
else:
print(f"[ERROR] {description} failed: {e}")
return False
except subprocess.TimeoutExpired:
print(f"[ERROR] {description} timed out")
return False
return False
def install_editable(project_root: Path, use_mirror: bool) -> bool:
"""Install unilabos in editable mode using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-e", str(project_root)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing unilabos in editable mode")
def install_requirements_uv(requirements_file: Path, use_mirror: bool) -> bool:
"""Install pip dependencies using uv (installed via conda-forge::uv)."""
cmd = ["uv", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with uv", retry=2)
def install_requirements_pip(requirements_file: Path, use_mirror: bool) -> bool:
"""Fallback: Install pip dependencies using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with pip", retry=2)
def check_uv_available() -> bool:
"""Check if uv is available (installed via conda-forge::uv)."""
try:
subprocess.run(["uv", "--version"], capture_output=True, check=True)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
return False
def main():
parser = argparse.ArgumentParser(description="Development installation script for UniLabOS")
parser.add_argument("--china", action="store_true", help="Force use China mirror (Tsinghua)")
parser.add_argument("--no-mirror", action="store_true", help="Force use default PyPI (no mirror)")
parser.add_argument(
"--skip-deps", action="store_true", help="Skip pip dependencies installation (only install unilabos)"
)
parser.add_argument("--use-pip", action="store_true", help="Use pip instead of uv for dependencies")
args = parser.parse_args()
# Determine project root
script_dir = Path(__file__).parent
project_root = script_dir.parent
requirements_file = project_root / "unilabos" / "utils" / "requirements.txt"
if not (project_root / "setup.py").exists():
print(f"[ERROR] setup.py not found in {project_root}")
sys.exit(1)
print("=" * 60)
print("UniLabOS Development Installation")
print("=" * 60)
print(f"Project root: {project_root}")
print()
# Determine mirror usage based on locale
if args.no_mirror:
use_mirror = False
print("[INFO] Mirror disabled by --no-mirror flag")
elif args.china:
use_mirror = True
print("[INFO] China mirror enabled by --china flag")
else:
use_mirror = is_chinese_locale()
if use_mirror:
print("[INFO] Chinese locale detected, using Tsinghua mirror")
else:
print("[INFO] Non-Chinese locale detected, using default PyPI")
print()
# Step 1: Install unilabos in editable mode
print("[STEP 1] Installing unilabos in editable mode...")
if not install_editable(project_root, use_mirror):
print("[ERROR] Failed to install unilabos")
print()
print("Manual fallback:")
if use_mirror:
print(f" pip install -e {project_root} -i {TSINGHUA_MIRROR}")
else:
print(f" pip install -e {project_root}")
sys.exit(1)
print()
# Step 2: Install pip dependencies
if args.skip_deps:
print("[INFO] Skipping pip dependencies installation (--skip-deps)")
else:
print("[STEP 2] Installing pip dependencies...")
if not requirements_file.exists():
print(f"[WARN] Requirements file not found: {requirements_file}")
print("[INFO] Skipping dependencies installation")
else:
# Try uv first (faster), fallback to pip
if args.use_pip:
print("[INFO] Using pip (--use-pip flag)")
success = install_requirements_pip(requirements_file, use_mirror)
elif check_uv_available():
print("[INFO] Using uv (installed via conda-forge::uv)")
success = install_requirements_uv(requirements_file, use_mirror)
if not success:
print("[WARN] uv failed, falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
else:
print("[WARN] uv not available (should be installed via: mamba install conda-forge::uv)")
print("[INFO] Falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
if not success:
print()
print("[WARN] Failed to install some dependencies automatically.")
print("You can manually install them:")
if use_mirror:
print(f" uv pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
print(" or:")
print(f" pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
else:
print(f" uv pip install -r {requirements_file}")
print(" or:")
print(f" pip install -r {requirements_file}")
print()
print("=" * 60)
print("Installation complete!")
print("=" * 60)
print()
print("Note: Some special packages (like pylabrobot) are installed")
print("automatically at runtime by unilabos if needed.")
print()
print("Verify installation:")
print(' python -c "import unilabos; print(unilabos.__version__)"')
print()
print("If you encounter issues, you can manually install dependencies:")
if use_mirror:
print(f" uv pip install -r unilabos/utils/requirements.txt -i {TSINGHUA_MIRROR}")
else:
print(" uv pip install -r unilabos/utils/requirements.txt")
print()
if __name__ == "__main__":
main()


@@ -2,6 +2,7 @@ import json
import logging
import traceback
import uuid
import xml.etree.ElementTree as ET
from typing import Any, Dict, List
import networkx as nx
@@ -24,15 +25,7 @@ class SimpleGraph:
def add_edge(self, source, target, **attrs):
"""添加边"""
# edge = {"source": source, "target": target, **attrs}
edge = {
"source": source, "target": target,
"source_node_uuid": source,
"target_node_uuid": target,
"source_handle_io": "source",
"target_handle_io": "target",
**attrs
}
edge = {"source": source, "target": target, **attrs}
self.edges.append(edge)
def to_dict(self):
@@ -49,7 +42,6 @@ class SimpleGraph:
"multigraph": False,
"graph": {},
"nodes": nodes_list,
"edges": self.edges,
"links": self.edges,
}
@@ -66,8 +58,495 @@ def extract_json_from_markdown(text: str) -> str:
return text
def convert_to_type(val: str) -> Any:
"""将字符串值转换为适当的数据类型"""
if val == "True":
return True
if val == "False":
return False
if val == "?":
return None
if val.endswith(" g"):
return float(val.split(" ")[0])
if val.endswith("mg"):
return float(val.split("mg")[0])
elif val.endswith("mmol"):
return float(val.split("mmol")[0]) / 1000
elif val.endswith("mol"):
return float(val.split("mol")[0])
elif val.endswith("ml"):
return float(val.split("ml")[0])
elif val.endswith("RPM"):
return float(val.split("RPM")[0])
elif val.endswith(" °C"):
return float(val.split(" ")[0])
elif val.endswith(" %"):
return float(val.split(" ")[0])
return val
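上面的 `convert_to_type` 按后缀把 XDL 字符串参数转成数值(注意 mmol 会换算成 mol)。下面是一个自包含的精简复刻,仅用于演示后缀匹配规则,并非原函数本体:

```python
# 精简示意:按单位后缀把 XDL 字符串参数转成数值(规则与上文 convert_to_type 一致)
def convert_to_type(val: str):
    if val == "True":
        return True
    if val == "False":
        return False
    if val == "?":
        return None
    # 先匹配较长的 "mmol",避免被 "mol" 分支截断;mmol 按 1/1000 换算为 mol
    for suffix, scale in (("mmol", 1e-3), ("mol", 1.0), ("mg", 1.0), ("ml", 1.0), ("RPM", 1.0)):
        if val.endswith(suffix):
            return float(val[: -len(suffix)]) * scale
    # 带空格的单位(如 "1.5 g"、"25 °C"、"80 %")取空格前的数值部分
    if val.endswith((" g", " °C", " %")):
        return float(val.split(" ")[0])
    return val  # 无法识别的值原样返回
```

这样 `"500mg"` 得到 `500.0`,`"2mmol"` 得到 `0.002`,而普通字符串(如试剂名)原样保留。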
def refactor_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""统一的数据重构函数,根据操作类型自动选择模板"""
refactored_data = []
# 定义操作映射,包含生物实验和有机化学的所有操作
OPERATION_MAPPING = {
# 生物实验操作
"transfer_liquid": "SynBioFactory-liquid_handler.prcxi-transfer_liquid",
"transfer": "SynBioFactory-liquid_handler.biomek-transfer",
"incubation": "SynBioFactory-liquid_handler.biomek-incubation",
"move_labware": "SynBioFactory-liquid_handler.biomek-move_labware",
"oscillation": "SynBioFactory-liquid_handler.biomek-oscillation",
# 有机化学操作
"HeatChillToTemp": "SynBioFactory-workstation-HeatChillProtocol",
"StopHeatChill": "SynBioFactory-workstation-HeatChillStopProtocol",
"StartHeatChill": "SynBioFactory-workstation-HeatChillStartProtocol",
"HeatChill": "SynBioFactory-workstation-HeatChillProtocol",
"Dissolve": "SynBioFactory-workstation-DissolveProtocol",
"Transfer": "SynBioFactory-workstation-TransferProtocol",
"Evaporate": "SynBioFactory-workstation-EvaporateProtocol",
"Recrystallize": "SynBioFactory-workstation-RecrystallizeProtocol",
"Filter": "SynBioFactory-workstation-FilterProtocol",
"Dry": "SynBioFactory-workstation-DryProtocol",
"Add": "SynBioFactory-workstation-AddProtocol",
}
UNSUPPORTED_OPERATIONS = ["Purge", "Wait", "Stir", "ResetHandling"]
for step in data:
operation = step.get("action")
if not operation or operation in UNSUPPORTED_OPERATIONS:
continue
# 处理重复操作
if operation == "Repeat":
times = step.get("times", step.get("parameters", {}).get("times", 1))
sub_steps = step.get("steps", step.get("parameters", {}).get("steps", []))
for i in range(int(times)):
sub_data = refactor_data(sub_steps)
refactored_data.extend(sub_data)
continue
# 获取模板名称
template = OPERATION_MAPPING.get(operation)
if not template:
# 自动推断模板类型
if operation.lower() in ["transfer", "incubation", "move_labware", "oscillation"]:
template = f"SynBioFactory-liquid_handler.biomek-{operation}"
else:
template = f"SynBioFactory-workstation-{operation}Protocol"
# 创建步骤数据
step_data = {
"template": template,
"description": step.get("description", step.get("purpose", f"{operation} operation")),
"lab_node_type": "Device",
"parameters": step.get("parameters", step.get("action_args", {})),
}
refactored_data.append(step_data)
return refactored_data
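`refactor_data` 对 Repeat 步骤的展开是递归进行的:每次迭代都把子步骤重新送回函数本身。下面用一个去掉模板映射、只保留控制流的精简版演示这一递归(步骤只假定含 action/times/steps 字段):

```python
# 精简示意:递归展开 Repeat 步骤(省略模板映射,只保留控制流)
def expand_steps(steps):
    out = []
    for step in steps:
        if step.get("action") == "Repeat":
            # 重复 times 次,每次递归展开子步骤(支持嵌套 Repeat)
            for _ in range(int(step.get("times", 1))):
                out.extend(expand_steps(step.get("steps", [])))
        else:
            out.append(step["action"])
    return out

nested = [
    {"action": "Add"},
    {"action": "Repeat", "times": 2, "steps": [
        {"action": "HeatChill"},
        {"action": "Repeat", "times": 2, "steps": [{"action": "Stir"}]},
    ]},
]
# → ['Add', 'HeatChill', 'Stir', 'Stir', 'HeatChill', 'Stir', 'Stir']
print(expand_steps(nested))
```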
def build_protocol_graph(
    labware_info: Dict[str, Dict[str, Any]], protocol_steps: List[Dict[str, Any]], workstation_name: str
) -> SimpleGraph:
"""统一的协议图构建函数,根据设备类型自动选择构建逻辑"""
G = SimpleGraph()
resource_last_writer = {}
LAB_NAME = "SynBioFactory"
protocol_steps = refactor_data(protocol_steps)
# 检查协议步骤中的模板来判断协议类型
has_biomek_template = any(
("biomek" in step.get("template", "")) or ("prcxi" in step.get("template", ""))
for step in protocol_steps
)
if has_biomek_template:
# 生物实验协议图构建
for labware_id, labware in labware_info.items():
node_id = str(uuid.uuid4())
labware_attrs = labware.copy()
labware_id = labware_attrs.pop("id", labware_attrs.get("name", f"labware_{uuid.uuid4()}"))
labware_attrs["description"] = labware_id
labware_attrs["lab_node_type"] = (
"Reagent" if "Plate" in str(labware_id) else "Labware" if "Rack" in str(labware_id) else "Sample"
)
labware_attrs["device_id"] = workstation_name
G.add_node(node_id, template=f"{LAB_NAME}-host_node-create_resource", **labware_attrs)
resource_last_writer[labware_id] = f"{node_id}:labware"
# 处理协议步骤
prev_node = None
for i, step in enumerate(protocol_steps):
node_id = str(uuid.uuid4())
G.add_node(node_id, **step)
# 添加控制流边
if prev_node is not None:
G.add_edge(prev_node, node_id, source_port="ready", target_port="ready")
prev_node = node_id
# 处理物料流
params = step.get("parameters", {})
if "sources" in params and params["sources"] in resource_last_writer:
source_node, source_port = resource_last_writer[params["sources"]].split(":")
G.add_edge(source_node, node_id, source_port=source_port, target_port="labware")
if "targets" in params:
resource_last_writer[params["targets"]] = f"{node_id}:labware"
# 添加协议结束节点
end_id = str(uuid.uuid4())
G.add_node(end_id, template=f"{LAB_NAME}-liquid_handler.biomek-run_protocol")
if prev_node is not None:
G.add_edge(prev_node, end_id, source_port="ready", target_port="ready")
else:
# 有机化学协议图构建
WORKSTATION_ID = workstation_name
# 为所有labware创建资源节点
for item_id, item in labware_info.items():
# item_id = item.get("id") or item.get("name", f"item_{uuid.uuid4()}")
node_id = str(uuid.uuid4())
# 判断节点类型
if item.get("type") == "hardware" or "reactor" in str(item_id).lower():
if "reactor" not in str(item_id).lower():
continue
lab_node_type = "Sample"
description = f"Prepare Reactor: {item_id}"
liquid_type = []
liquid_volume = []
else:
lab_node_type = "Reagent"
description = f"Add Reagent to Flask: {item_id}"
liquid_type = [item_id]
liquid_volume = [1e5]
G.add_node(
node_id,
template=f"{LAB_NAME}-host_node-create_resource",
description=description,
lab_node_type=lab_node_type,
res_id=item_id,
device_id=WORKSTATION_ID,
class_name="container",
parent=WORKSTATION_ID,
bind_locations={"x": 0.0, "y": 0.0, "z": 0.0},
liquid_input_slot=[-1],
liquid_type=liquid_type,
liquid_volume=liquid_volume,
slot_on_deck="",
role=item.get("role", ""),
)
resource_last_writer[item_id] = f"{node_id}:labware"
last_control_node_id = None
# 处理协议步骤
for step in protocol_steps:
node_id = str(uuid.uuid4())
G.add_node(node_id, **step)
# 控制流
if last_control_node_id is not None:
G.add_edge(last_control_node_id, node_id, source_port="ready", target_port="ready")
last_control_node_id = node_id
# 物料流
params = step.get("parameters", {})
input_resources = {
"Vessel": params.get("vessel"),
"ToVessel": params.get("to_vessel"),
"FromVessel": params.get("from_vessel"),
"reagent": params.get("reagent"),
"solvent": params.get("solvent"),
"compound": params.get("compound"),
"sources": params.get("sources"),
"targets": params.get("targets"),
}
for target_port, resource_name in input_resources.items():
if resource_name and resource_name in resource_last_writer:
source_node, source_port = resource_last_writer[resource_name].split(":")
G.add_edge(source_node, node_id, source_port=source_port, target_port=target_port)
output_resources = {
"VesselOut": params.get("vessel"),
"FromVesselOut": params.get("from_vessel"),
"ToVesselOut": params.get("to_vessel"),
"FiltrateOut": params.get("filtrate_vessel"),
"reagent": params.get("reagent"),
"solvent": params.get("solvent"),
"compound": params.get("compound"),
"sources_out": params.get("sources"),
"targets_out": params.get("targets"),
}
for source_port, resource_name in output_resources.items():
if resource_name:
resource_last_writer[resource_name] = f"{node_id}:{source_port}"
return G
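物料流边的核心是 `resource_last_writer`:每当某节点写出一个资源,就记录 `"节点:端口"`;后续读取同一资源的步骤据此连到最近的上游。下面是一个脱离 SimpleGraph 的最小示意(节点 id 与端口名为演示用的假设值):

```python
# 最小示意:用 resource_last_writer 把共享容器串成物料流边
resource_last_writer = {"flask_1": "create_0:labware"}  # create_resource 节点写出 flask_1
edges = []
steps = [
    ("add_1", {"vessel": "flask_1"}),     # 读取并写回 flask_1
    ("filter_2", {"vessel": "flask_1"}),  # 再次读取上一步的输出
]
for node_id, params in steps:
    vessel = params.get("vessel")
    if vessel in resource_last_writer:
        # 取最近一次写者,连一条 物料流 边
        src_node, src_port = resource_last_writer[vessel].split(":")
        edges.append((src_node, node_id, src_port, "Vessel"))
    # 本步骤执行后,flask_1 的最新写者变为当前节点
    resource_last_writer[vessel] = f"{node_id}:VesselOut"

# edges == [("create_0", "add_1", "labware", "Vessel"),
#           ("add_1", "filter_2", "VesselOut", "Vessel")]
```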
def draw_protocol_graph(protocol_graph: SimpleGraph, output_path: str):
"""
(辅助功能) 使用 networkx 和 matplotlib 绘制协议工作流图,用于可视化。
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template", node_id[:8]))
G.add_node(node_id, label=label, **attrs)
for edge in protocol_graph.edges:
G.add_edge(edge["source"], edge["target"])
plt.figure(figsize=(20, 15))
try:
pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
except Exception:
pos = nx.shell_layout(G) # Fallback layout
node_labels = {node: data["label"] for node, data in G.nodes(data=True)}
nx.draw(
G,
pos,
with_labels=False,
node_size=2500,
node_color="skyblue",
node_shape="o",
edge_color="gray",
width=1.5,
arrowsize=15,
)
nx.draw_networkx_labels(G, pos, labels=node_labels, font_size=8, font_weight="bold")
plt.title("Chemical Protocol Workflow Graph", size=15)
plt.savefig(output_path, dpi=300, bbox_inches="tight")
plt.close()
print(f" - Visualization saved to '{output_path}'")
from networkx.drawing.nx_agraph import to_agraph
import re
COMPASS = {"n","e","s","w","ne","nw","se","sw","c"}
def _is_compass(port: str) -> bool:
return isinstance(port, str) and port.lower() in COMPASS
def draw_protocol_graph_with_ports(protocol_graph, output_path: str, rankdir: str = "LR"):
"""
使用 Graphviz 端口语法绘制协议工作流图。
    - 若边上的 source_port/target_port 是 compass(n/e/s/w/...),直接用 compass。
- 否则自动为节点创建 record 形状并定义命名端口 <portname>。
    最终由 PyGraphviz 渲染并输出到 output_path(后缀决定格式,如 .png/.svg/.pdf)。
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
# 1) 先用 networkx 搭建有向图,保留端口属性
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template", node_id[:8]))
# 保留一个干净的“中心标签”,用于放在 record 的中间槽
G.add_node(node_id, _core_label=str(label), **{k:v for k,v in attrs.items() if k not in ("label",)})
edges_data = []
in_ports_by_node = {} # 收集命名输入端口
out_ports_by_node = {} # 收集命名输出端口
for edge in protocol_graph.edges:
u = edge["source"]
v = edge["target"]
sp = edge.get("source_port")
tp = edge.get("target_port")
# 记录到图里(保留原始端口信息)
G.add_edge(u, v, source_port=sp, target_port=tp)
edges_data.append((u, v, sp, tp))
        # 如果不是 compass,就按“命名端口”先归类,等会儿给节点造 record
if sp and not _is_compass(sp):
out_ports_by_node.setdefault(u, set()).add(str(sp))
if tp and not _is_compass(tp):
in_ports_by_node.setdefault(v, set()).add(str(tp))
    # 2) 转为 AGraph,使用 Graphviz 渲染
A = to_agraph(G)
A.graph_attr.update(rankdir=rankdir, splines="true", concentrate="false", fontsize="10")
A.node_attr.update(shape="box", style="rounded,filled", fillcolor="lightyellow", color="#999999", fontname="Helvetica")
A.edge_attr.update(arrowsize="0.8", color="#666666")
# 3) 为需要命名端口的节点设置 record 形状与 label
# 左列 = 输入端口;中间 = 核心标签;右列 = 输出端口
for n in A.nodes():
node = A.get_node(n)
core = G.nodes[n].get("_core_label", n)
in_ports = sorted(in_ports_by_node.get(n, []))
out_ports = sorted(out_ports_by_node.get(n, []))
        # 如果该节点涉及命名端口,则用 record;否则保留原 box
if in_ports or out_ports:
def port_fields(ports):
if not ports:
return " " # 必须留一个空槽占位
# 每个端口一个小格子,<p> name
return "|".join(f"<{re.sub(r'[^A-Za-z0-9_:.|-]', '_', p)}> {p}" for p in ports)
left = port_fields(in_ports)
right = port_fields(out_ports)
# 三栏:左(入) | 中(节点名) | 右(出)
record_label = f"{{ {left} | {core} | {right} }}"
node.attr.update(shape="record", label=record_label)
else:
# 没有命名端口:普通盒子,显示核心标签
node.attr.update(label=str(core))
# 4) 给边设置 headport / tailport
    # - 若端口为 compass:直接用 compass(e.g., headport="e")
# - 若端口为命名端口:使用在 record 中定义的 <port> 名(同名即可)
for (u, v, sp, tp) in edges_data:
e = A.get_edge(u, v)
        # Graphviz 属性:tail 是源,head 是目标
if sp:
if _is_compass(sp):
e.attr["tailport"] = sp.lower()
else:
# 与 record label 中 <port> 名一致;特殊字符已在 label 中做了清洗
e.attr["tailport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(sp))
if tp:
if _is_compass(tp):
e.attr["headport"] = tp.lower()
else:
e.attr["headport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(tp))
# 可选:若想让边更贴边缘,可设置 constraint/spline 等
# e.attr["arrowhead"] = "vee"
# 5) 输出
A.draw(output_path, prog="dot")
print(f" - Port-aware workflow rendered to '{output_path}'")
def flatten_xdl_procedure(procedure_elem: ET.Element) -> List[ET.Element]:
"""展平嵌套的XDL程序结构"""
flattened_operations = []
TEMP_UNSUPPORTED_PROTOCOL = ["Purge", "Wait", "Stir", "ResetHandling"]
def extract_operations(element: ET.Element):
if element.tag not in ["Prep", "Reaction", "Workup", "Purification", "Procedure"]:
if element.tag not in TEMP_UNSUPPORTED_PROTOCOL:
flattened_operations.append(element)
for child in element:
extract_operations(child)
for child in procedure_elem:
extract_operations(child)
return flattened_operations
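`flatten_xdl_procedure` 会跳过分段标签(Prep/Reaction/Workup/Purification/Procedure)与暂不支持的操作,但始终递归进入子元素。一个自包含的小验证(内联复刻相同逻辑,XML 片段为虚构示例):

```python
import xml.etree.ElementTree as ET

SECTION_TAGS = {"Prep", "Reaction", "Workup", "Purification", "Procedure"}
UNSUPPORTED = {"Purge", "Wait", "Stir", "ResetHandling"}

def flatten(procedure):
    ops = []
    def walk(elem):
        # 分段标签和不支持的操作不收集,但子元素仍要递归
        if elem.tag not in SECTION_TAGS and elem.tag not in UNSUPPORTED:
            ops.append(elem)
        for child in elem:
            walk(child)
    for child in procedure:
        walk(child)
    return ops

proc = ET.fromstring(
    "<Procedure><Reaction>"
    "<Add reagent='HCl'/><Stir time='5 min'/><HeatChill temp='80 °C'/>"
    "</Reaction></Procedure>"
)
# 分段标签 Reaction 与不支持的 Stir 都被跳过
print([e.tag for e in flatten(proc)])  # ['Add', 'HeatChill']
```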
def parse_xdl_content(xdl_content: str) -> tuple:
"""解析XDL内容"""
try:
xdl_content_cleaned = "".join(c for c in xdl_content if c.isprintable())
root = ET.fromstring(xdl_content_cleaned)
synthesis_elem = root.find("Synthesis")
if synthesis_elem is None:
return None, None, None
# 解析硬件组件
hardware_elem = synthesis_elem.find("Hardware")
hardware = []
if hardware_elem is not None:
hardware = [{"id": c.get("id"), "type": c.get("type")} for c in hardware_elem.findall("Component")]
# 解析试剂
reagents_elem = synthesis_elem.find("Reagents")
reagents = []
if reagents_elem is not None:
reagents = [{"name": r.get("name"), "role": r.get("role", "")} for r in reagents_elem.findall("Reagent")]
# 解析程序
procedure_elem = synthesis_elem.find("Procedure")
if procedure_elem is None:
return None, None, None
flattened_operations = flatten_xdl_procedure(procedure_elem)
return hardware, reagents, flattened_operations
except ET.ParseError as e:
raise ValueError(f"Invalid XDL format: {e}")
def convert_xdl_to_dict(xdl_content: str) -> Dict[str, Any]:
"""
将XDL XML格式转换为标准的字典格式
Args:
xdl_content: XDL XML内容
Returns:
转换结果,包含步骤和器材信息
"""
try:
hardware, reagents, flattened_operations = parse_xdl_content(xdl_content)
if hardware is None:
return {"error": "Failed to parse XDL content", "success": False}
# 将XDL元素转换为字典格式
steps_data = []
for elem in flattened_operations:
# 转换参数类型
parameters = {}
for key, val in elem.attrib.items():
converted_val = convert_to_type(val)
if converted_val is not None:
parameters[key] = converted_val
step_dict = {
"operation": elem.tag,
"parameters": parameters,
"description": elem.get("purpose", f"Operation: {elem.tag}"),
}
steps_data.append(step_dict)
# 合并硬件和试剂为统一的labware_info格式
labware_data = []
labware_data.extend({"id": hw["id"], "type": "hardware", **hw} for hw in hardware)
labware_data.extend({"name": reagent["name"], "type": "reagent", **reagent} for reagent in reagents)
return {
"success": True,
"steps": steps_data,
"labware": labware_data,
"message": f"Successfully converted XDL to dict format. Found {len(steps_data)} steps and {len(labware_data)} labware items.",
}
except Exception as e:
error_msg = f"XDL conversion failed: {str(e)}"
logger.error(error_msg)
return {"error": error_msg, "success": False}
def create_workflow(


@@ -4,7 +4,7 @@ package_name = 'unilabos'
setup(
name=package_name,
version='0.10.19',
version='0.10.13',
packages=find_packages(),
include_package_data=True,
install_requires=['setuptools'],


@@ -1,7 +0,0 @@
"""
测试包根目录。
让 `tests.*` 模块可以被正常 import例如给 `unilabos` 下的测试入口使用)。
"""


@@ -1,296 +0,0 @@
"""
批量转运编译器测试
覆盖单物料退化、刚好一批、多批次、空操作、AGV 配置发现、children dict 状态。
"""
import pytest
import networkx as nx
from unilabos.compile.batch_transfer_protocol import generate_batch_transfer_protocol
from unilabos.compile.agv_transfer_protocol import generate_agv_transfer_protocol
from unilabos.compile._agv_utils import find_agv_config, get_agv_capacity, split_batches
# ============ 构建测试用设备图 ============
def _make_graph(capacity_x=2, capacity_y=1, capacity_z=1):
"""构建包含 AGV 节点的测试设备图"""
G = nx.DiGraph()
# AGV 节点
G.add_node("AGV", **{
"type": "device",
"class_": "agv_transport_station",
"config": {
"protocol_type": ["AGVTransferProtocol", "BatchTransferProtocol"],
"device_roles": {
"navigator": "zhixing_agv",
"arm": "zhixing_ur_arm"
},
"route_table": {
"StationA->StationB": {
"nav_command": '{"target": "LM1"}',
"arm_pick": '{"task_name": "pick.urp"}',
"arm_place": '{"task_name": "place.urp"}'
},
"AGV->StationA": {
"nav_command": '{"target": "LM1"}',
"arm_pick": '{"task_name": "pick.urp"}',
"arm_place": '{"task_name": "place.urp"}'
},
"StationA->StationA": {
"nav_command": '{"target": "LM1"}',
"arm_pick": '{"task_name": "pick.urp"}',
"arm_place": '{"task_name": "place.urp"}'
},
}
}
})
# AGV sub-devices
G.add_node("zhixing_agv", type="device", class_="zhixing_agv")
G.add_node("zhixing_ur_arm", type="device", class_="zhixing_ur_arm")
G.add_edge("AGV", "zhixing_agv")
G.add_edge("AGV", "zhixing_ur_arm")
# AGV warehouse sub-resource
G.add_node("agv_platform", **{
"type": "warehouse",
"config": {
"name": "agv_platform",
"num_items_x": capacity_x,
"num_items_y": capacity_y,
"num_items_z": capacity_z,
}
})
G.add_edge("AGV", "agv_platform")
# Source/target workstations
G.add_node("StationA", type="device", class_="workstation")
G.add_node("StationB", type="device", class_="workstation")
return G
def _make_repos(items_count=2):
"""Build from_repo and to_repo dicts for testing"""
children = {}
for i in range(items_count):
pos = f"A{i + 1:02d}"
children[pos] = {
"id": f"resource_{i + 1}",
"name": f"R{i + 1}",
"parent": "StationA",
"type": "resource",
}
from_repo = {
"StationA": {
"id": "StationA",
"name": "StationA",
"children": children,
}
}
to_repo = {
"StationB": {
"id": "StationB",
"name": "StationB",
"children": {},
}
}
return from_repo, to_repo
def _make_items(count=2):
"""Build transfer_resources / from_positions / to_positions"""
resources = [
{
"id": f"resource_{i + 1}",
"name": f"R{i + 1}",
"sample_id": f"uuid-{i + 1}",
"parent": "StationA",
"type": "resource",
}
for i in range(count)
]
from_positions = [f"A{i + 1:02d}" for i in range(count)]
to_positions = [f"A{i + 1:02d}" for i in range(count)]
return resources, from_positions, to_positions
# ============ _agv_utils tests ============
class TestAGVUtils:
def test_find_agv_config(self):
G = _make_graph()
cfg = find_agv_config(G)
assert cfg["agv_id"] == "AGV"
assert cfg["device_roles"]["navigator"] == "zhixing_agv"
assert cfg["device_roles"]["arm"] == "zhixing_ur_arm"
assert "StationA->StationB" in cfg["route_table"]
def test_find_agv_config_by_id(self):
G = _make_graph()
cfg = find_agv_config(G, agv_id="AGV")
assert cfg["agv_id"] == "AGV"
def test_find_agv_config_not_found(self):
G = nx.DiGraph()
G.add_node("SomeDevice", type="device", class_="pump")
with pytest.raises(ValueError, match="未找到 AGV"):
find_agv_config(G)
def test_get_agv_capacity(self):
G = _make_graph(capacity_x=2, capacity_y=1, capacity_z=1)
assert get_agv_capacity(G, "AGV") == 2
def test_get_agv_capacity_multi_layer(self):
G = _make_graph(capacity_x=1, capacity_y=2, capacity_z=3)
assert get_agv_capacity(G, "AGV") == 6
def test_split_batches_exact(self):
assert split_batches([1, 2], 2) == [[1, 2]]
def test_split_batches_overflow(self):
assert split_batches([1, 2, 3], 2) == [[1, 2], [3]]
def test_split_batches_single(self):
assert split_batches([1], 4) == [[1]]
def test_split_batches_zero_capacity(self):
with pytest.raises(ValueError):
split_batches([1], 0)
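The assertions above fully pin down the helper semantics. A reference sketch consistent with them (the real implementations live in `unilabos.compile._agv_utils`; the names and signatures below are simplified assumptions, not the actual API) could be:

```python
from typing import List, Sequence, TypeVar

T = TypeVar("T")

def split_batches(items: Sequence[T], capacity: int) -> List[List[T]]:
    # Chunk items into consecutive batches of at most `capacity` elements.
    if capacity <= 0:
        raise ValueError(f"AGV capacity must be positive, got {capacity}")
    return [list(items[i:i + capacity]) for i in range(0, len(items), capacity)]

def warehouse_capacity(config: dict) -> int:
    # Capacity is the product of the warehouse grid dimensions, matching
    # the num_items_x * num_items_y * num_items_z layout used in _make_graph.
    return (config.get("num_items_x", 1)
            * config.get("num_items_y", 1)
            * config.get("num_items_z", 1))
```

For example, a 1x2x3 warehouse yields a capacity of 6, which is why `test_get_agv_capacity_multi_layer` expects 6.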
# ============ Batch transfer compiler tests ============
class TestBatchTransferProtocol:
def test_empty_items(self):
"""An empty item list yields empty steps"""
G = _make_graph()
from_repo, to_repo = _make_repos(0)
steps = generate_batch_transfer_protocol(G, from_repo, to_repo, [], [], [])
assert steps == []
def test_single_item(self):
"""Single-item transfer (BatchTransfer degenerates to a single item)"""
G = _make_graph(capacity_x=2)
from_repo, to_repo = _make_repos(1)
resources, from_pos, to_pos = _make_items(1)
steps = generate_batch_transfer_protocol(G, from_repo, to_repo, resources, from_pos, to_pos)
# Expected: nav to source + 1 pick + nav to target + 1 place = 4 steps
assert len(steps) == 4
assert steps[0]["action_name"] == "send_nav_task"
assert steps[1]["action_name"] == "move_pos_task"
assert steps[1]["_transfer_meta"]["phase"] == "pick"
assert steps[2]["action_name"] == "send_nav_task"
assert steps[3]["action_name"] == "move_pos_task"
assert steps[3]["_transfer_meta"]["phase"] == "place"
def test_exact_capacity(self):
"""Item count equals AGV capacity: exactly one batch"""
G = _make_graph(capacity_x=2)
from_repo, to_repo = _make_repos(2)
resources, from_pos, to_pos = _make_items(2)
steps = generate_batch_transfer_protocol(G, from_repo, to_repo, resources, from_pos, to_pos)
# nav + 2 picks + nav + 2 places = 6 steps
assert len(steps) == 6
pick_steps = [s for s in steps if s.get("_transfer_meta", {}).get("phase") == "pick"]
place_steps = [s for s in steps if s.get("_transfer_meta", {}).get("phase") == "place"]
assert len(pick_steps) == 2
assert len(place_steps) == 2
def test_multi_batch(self):
"""Item count exceeds AGV capacity: batches are split automatically"""
G = _make_graph(capacity_x=2)
from_repo, to_repo = _make_repos(3)
resources, from_pos, to_pos = _make_items(3)
steps = generate_batch_transfer_protocol(G, from_repo, to_repo, resources, from_pos, to_pos)
# Batch 1: nav + 2 picks + nav + 2 places + nav (return) = 7
# Batch 2: nav + 1 pick + nav + 1 place = 4
# Total: 11 steps
assert len(steps) == 11
nav_steps = [s for s in steps if s["action_name"] == "send_nav_task"]
# Batch 1: 2 navs (to source + to target) + 1 nav (return); batch 2: 2 navs = 5 navs total
assert len(nav_steps) == 5
def test_children_dict_updated(self):
"""children dict state on all sides is correct after the compile phase"""
G = _make_graph(capacity_x=2)
from_repo, to_repo = _make_repos(2)
resources, from_pos, to_pos = _make_items(2)
assert "A01" in from_repo["StationA"]["children"]
assert "A02" in from_repo["StationA"]["children"]
assert len(to_repo["StationB"]["children"]) == 0
generate_batch_transfer_protocol(G, from_repo, to_repo, resources, from_pos, to_pos)
# After compile, the children should have been popped from from_repo
assert "A01" not in from_repo["StationA"]["children"]
assert "A02" not in from_repo["StationA"]["children"]
# to_repo should now hold the new items
assert "A01" in to_repo["StationB"]["children"]
assert "A02" in to_repo["StationB"]["children"]
assert to_repo["StationB"]["children"]["A01"]["id"] == "resource_1"
def test_device_ids_from_config(self):
"""All device IDs come from the config, none hard-coded"""
G = _make_graph()
from_repo, to_repo = _make_repos(1)
resources, from_pos, to_pos = _make_items(1)
steps = generate_batch_transfer_protocol(G, from_repo, to_repo, resources, from_pos, to_pos)
device_ids = {s["device_id"] for s in steps}
assert "zhixing_agv" in device_ids
assert "zhixing_ur_arm" in device_ids
def test_route_not_found(self):
"""Raises when the route table has no matching route"""
G = _make_graph()
from_repo = {"Unknown": {"id": "Unknown", "children": {"A01": {"id": "R1", "parent": "Unknown"}}}}
to_repo = {"Other": {"id": "Other", "children": {}}}
resources = [{"id": "R1", "name": "R1"}]
with pytest.raises(KeyError, match="路由表"):
generate_batch_transfer_protocol(G, from_repo, to_repo, resources, ["A01"], ["B01"])
def test_length_mismatch(self):
"""Raises when the three arrays differ in length"""
G = _make_graph()
from_repo, to_repo = _make_repos(2)
resources = [{"id": "R1"}]
with pytest.raises(ValueError, match="长度不一致"):
generate_batch_transfer_protocol(G, from_repo, to_repo, resources, ["A01", "A02"], ["B01"])
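The step counts asserted in the tests above follow one arithmetic rule: each batch contributes a nav to the source, its picks, a nav to the target, and its places, and every non-final batch adds one nav back. A small hypothetical helper derived purely from those assertions (not part of the repo) makes the rule explicit:

```python
def expected_step_count(item_count: int, capacity: int) -> int:
    # Per batch of size b: nav + b picks + nav + b places = 2 + 2*b steps.
    # Every non-final batch appends one extra nav back to the source station.
    if item_count == 0:
        return 0
    batches = [min(capacity, item_count - i) for i in range(0, item_count, capacity)]
    return sum(2 + 2 * b for b in batches) + (len(batches) - 1)
```

This reproduces the values asserted above: 4 steps for 1 item, 6 for 2 items at capacity 2, and 11 for 3 items at capacity 2.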
# ============ Refactored single-item AGV compiler tests ============
class TestAGVTransferProtocol:
def test_single_transfer_from_config(self):
"""The refactored single-item compiler reads its config from G"""
G = _make_graph()
from_repo = {"StationA": {"id": "StationA", "children": {"A01": {"id": "R1", "parent": "StationA"}}}}
to_repo = {"StationB": {"id": "StationB", "children": {}}}
steps = generate_agv_transfer_protocol(G, from_repo, "A01", to_repo, "B01")
assert len(steps) == 2
assert steps[0]["device_id"] == "zhixing_agv"
assert steps[0]["action_name"] == "send_nav_task"
assert steps[1]["device_id"] == "zhixing_ur_arm"
assert steps[1]["action_name"] == "move_pos_task"
def test_children_updated(self):
"""children dicts are updated correctly after a single-item compile"""
G = _make_graph()
from_repo = {"StationA": {"id": "StationA", "children": {"A01": {"id": "R1", "parent": "StationA"}}}}
to_repo = {"StationB": {"id": "StationB", "children": {}}}
generate_agv_transfer_protocol(G, from_repo, "A01", to_repo, "B01")
assert "A01" not in from_repo["StationA"]["children"]
assert "B01" in to_repo["StationB"]["children"]
assert to_repo["StationB"]["children"]["B01"]["parent"] == "StationB"

View File

@@ -1,706 +0,0 @@
"""
Full-chain integration test: ROS goal conversion → ResourceTreeSet → get_plr_nested_dict → compiler → action list.
Simulates the complete path in workstation.py:
1. The host returns raw_data (simulating the resource_get response)
2. ResourceTreeSet.from_raw_dict_list(raw_data) builds the resource tree
3. tree.root_node.get_plr_nested_dict() produces a nested dict
4. protocol_kwargs is passed to the compiler
5. The compiler returns action_list; verify its structure and key fields
"""
import copy
import json
import pytest
import networkx as nx
from unilabos.resources.resource_tracker import (
ResourceDictInstance,
ResourceTreeSet,
)
from unilabos.compile.utils.resource_helper import (
ensure_resource_instance,
resource_to_dict,
get_resource_id,
get_resource_data,
)
from unilabos.compile.utils.vessel_parser import get_vessel
# ============ Build mock device graph ============
def _build_test_graph():
"""Build a test graph containing common device nodes"""
G = nx.DiGraph()
# Vessel
G.add_node("reactor_01", **{
"id": "reactor_01",
"name": "reactor_01",
"type": "device",
"class": "virtual_stirrer",
"data": {},
"config": {},
})
# Stirring device
G.add_node("stirrer_1", **{
"id": "stirrer_1",
"name": "stirrer_1",
"type": "device",
"class": "virtual_stirrer",
"data": {},
"config": {},
})
G.add_edge("stirrer_1", "reactor_01")
# Heating device
G.add_node("heatchill_1", **{
"id": "heatchill_1",
"name": "heatchill_1",
"type": "device",
"class": "virtual_heatchill",
"data": {},
"config": {},
})
G.add_edge("heatchill_1", "reactor_01")
# Reagent vessel (liquid)
G.add_node("flask_water", **{
"id": "flask_water",
"name": "flask_water",
"type": "container",
"class": "",
"data": {"reagent_name": "water", "liquid": [{"liquid_type": "water", "volume": 500.0}]},
"config": {"reagent": "water"},
})
# Solid dispenser
G.add_node("solid_dispenser_1", **{
"id": "solid_dispenser_1",
"name": "solid_dispenser_1",
"type": "device",
"class": "solid_dispenser",
"data": {},
"config": {},
})
# Pump
G.add_node("pump_1", **{
"id": "pump_1",
"name": "pump_1",
"type": "device",
"class": "virtual_pump",
"data": {},
"config": {},
})
G.add_edge("flask_water", "pump_1")
G.add_edge("pump_1", "reactor_01")
return G
# ============ Build mock host response data ============
def _make_raw_resource(
id="reactor_01",
uuid="uuid-reactor-01",
name="reactor_01",
klass="virtual_stirrer",
type_="device",
parent=None,
parent_uuid=None,
data=None,
config=None,
extra=None,
):
"""Simulate a single resource dict returned by the host (matching the resource_get service response)"""
return {
"id": id,
"uuid": uuid,
"name": name,
"class": klass,
"type": type_,
"parent": parent,
"parent_uuid": parent_uuid or "",
"description": "",
"config": config or {},
"data": data or {},
"extra": extra or {},
"position": {"x": 0.0, "y": 0.0, "z": 0.0},
}
def _simulate_workstation_resource_enrichment(raw_data_list, field_type="unilabos_msgs/Resource"):
"""
Simulate the core resource-enrichment logic in workstation.py:
raw_data → ResourceTreeSet.from_raw_dict_list → get_plr_nested_dict → protocol_kwargs[k]
"""
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data_list)
if field_type == "unilabos_msgs/Resource":
# Single Resource: take the root node of the first tree
root_instance = tree_set.trees[0].root_node if tree_set.trees else None
return root_instance.get_plr_nested_dict() if root_instance else {}
else:
# sequence<Resource>: return a list
return [tree.root_node.get_plr_nested_dict() for tree in tree_set.trees]
# ============ 全链路测试Stir 协议 ============
class TestStirProtocolFullChain:
"""Stir protocol full chain: host raw_data → enriched dict → compiler → action_list"""
def test_stir_with_enriched_resource_dict(self):
"""A single Resource is passed to the stir compiler after enrichment"""
from unilabos.compile.stir_protocol import generate_stir_protocol
raw_data = [_make_raw_resource(
id="reactor_01", uuid="uuid-reactor-01",
klass="virtual_stirrer", type_="device",
)]
# Simulate workstation enrichment
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
assert enriched_vessel["id"] == "reactor_01"
assert enriched_vessel["uuid"] == "uuid-reactor-01"
assert enriched_vessel["class"] == "virtual_stirrer"
# Pass to the compiler
G = _build_test_graph()
actions = generate_stir_protocol(
G=G,
vessel=enriched_vessel,
time="60",
stir_speed=300.0,
)
assert isinstance(actions, list)
assert len(actions) >= 1
action = actions[0]
assert action["device_id"] == "stirrer_1"
assert action["action_name"] == "stir"
assert "vessel" in action["action_kwargs"]
assert action["action_kwargs"]["vessel"]["id"] == "reactor_01"
def test_stir_with_resource_dict_instance(self):
"""Pass a ResourceDictInstance directly to the stir compiler (converted via get_plr_nested_dict)"""
from unilabos.compile.stir_protocol import generate_stir_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data)
inst = tree_set.trees[0].root_node
# Convert via resource_to_dict (the resource_helper compatibility layer)
vessel_dict = resource_to_dict(inst)
assert isinstance(vessel_dict, dict)
assert vessel_dict["id"] == "reactor_01"
G = _build_test_graph()
actions = generate_stir_protocol(G=G, vessel=vessel_dict, time="30")
assert len(actions) >= 1
assert actions[0]["action_name"] == "stir"
def test_stir_with_string_vessel(self):
"""Legacy mode compatibility: pass the vessel as a plain string"""
from unilabos.compile.stir_protocol import generate_stir_protocol
G = _build_test_graph()
actions = generate_stir_protocol(G=G, vessel="reactor_01", time="30")
assert len(actions) >= 1
assert actions[0]["device_id"] == "stirrer_1"
assert actions[0]["action_kwargs"]["vessel"]["id"] == "reactor_01"
# ============ 全链路测试HeatChill 协议 ============
class TestHeatChillProtocolFullChain:
"""HeatChill protocol full chain"""
def test_heatchill_with_enriched_resource(self):
from unilabos.compile.heatchill_protocol import generate_heat_chill_protocol
raw_data = [_make_raw_resource(id="reactor_01", klass="virtual_stirrer")]
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_heat_chill_protocol(
G=G,
vessel=enriched_vessel,
temp=80.0,
time="300",
)
assert isinstance(actions, list)
assert len(actions) >= 1
action = actions[0]
assert action["device_id"] == "heatchill_1"
assert action["action_name"] == "heat_chill"
assert action["action_kwargs"]["temp"] == 80.0
def test_heatchill_start_with_enriched_resource(self):
from unilabos.compile.heatchill_protocol import generate_heat_chill_start_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_heat_chill_start_protocol(
G=G,
vessel=enriched_vessel,
temp=60.0,
)
assert len(actions) >= 1
assert actions[0]["action_name"] == "heat_chill_start"
assert actions[0]["action_kwargs"]["temp"] == 60.0
def test_heatchill_stop_with_enriched_resource(self):
from unilabos.compile.heatchill_protocol import generate_heat_chill_stop_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_heat_chill_stop_protocol(G=G, vessel=enriched_vessel)
assert len(actions) >= 1
assert actions[0]["action_name"] == "heat_chill_stop"
# ============ 全链路测试Add 协议 ============
class TestAddProtocolFullChain:
"""Add protocol full chain: vessel enrichment + reagent lookup + pump transfer"""
def test_add_solid_with_enriched_resource(self):
from unilabos.compile.add_protocol import generate_add_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_add_protocol(
G=G,
vessel=enriched_vessel,
reagent="NaCl",
mass="5 g",
)
assert isinstance(actions, list)
assert len(actions) >= 1
# Should contain at least one add_solid or log_message action
action_names = [a.get("action_name", "") for a in actions]
assert any(name in ["add_solid", "log_message"] for name in action_names)
def test_add_liquid_with_enriched_resource(self):
from unilabos.compile.add_protocol import generate_add_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched_vessel = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_add_protocol(
G=G,
vessel=enriched_vessel,
reagent="water",
volume="10 mL",
)
assert isinstance(actions, list)
assert len(actions) >= 1
# ============ 全链路测试ResourceDictInstance 兼容层 ============
class TestResourceDictInstanceCompatibility:
"""Verify the compiler compatibility layer's handling of ResourceDictInstance"""
def test_get_vessel_from_enriched_dict(self):
"""get_vessel handling of an enriched dict"""
raw_data = [_make_raw_resource(
id="reactor_01",
data={"temperature": 25.0, "liquid": [{"liquid_type": "water", "volume": 10.0}]},
)]
enriched = _simulate_workstation_resource_enrichment(raw_data)
vessel_id, vessel_data = get_vessel(enriched)
assert vessel_id == "reactor_01"
assert vessel_data["temperature"] == 25.0
assert len(vessel_data["liquid"]) == 1
def test_get_vessel_from_resource_instance(self):
"""get_vessel handling of a ResourceDictInstance directly"""
raw_data = [_make_raw_resource(
id="reactor_01",
data={"temperature": 25.0},
)]
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data)
inst = tree_set.trees[0].root_node
vessel_id, vessel_data = get_vessel(inst)
assert vessel_id == "reactor_01"
assert vessel_data["temperature"] == 25.0
def test_ensure_resource_instance_round_trip(self):
"""Lossless round trip: ensure_resource_instance → resource_to_dict"""
raw_data = [_make_raw_resource(
id="reactor_01", uuid="uuid-r01", klass="virtual_stirrer",
data={"temp": 25.0},
)]
enriched = _simulate_workstation_resource_enrichment(raw_data)
# dict → ResourceDictInstance
inst = ensure_resource_instance(enriched)
assert isinstance(inst, ResourceDictInstance)
assert inst.res_content.id == "reactor_01"
assert inst.res_content.uuid == "uuid-r01"
# ResourceDictInstance → dict
d = resource_to_dict(inst)
assert isinstance(d, dict)
assert d["id"] == "reactor_01"
assert d["uuid"] == "uuid-r01"
assert d["class"] == "virtual_stirrer"
# ============ Full-chain test: resource tree with children ============
class TestResourceTreeWithChildren:
"""Test the path of a resource tree with children through the compiler"""
def _make_tree_with_children(self):
"""Build a StationA -> [Flask1, Flask2] resource tree"""
return [
_make_raw_resource(
id="StationA", uuid="uuid-station-a",
name="StationA", klass="workstation", type_="device",
),
_make_raw_resource(
id="Flask1", uuid="uuid-flask-1",
name="Flask1", klass="", type_="resource",
parent="StationA", parent_uuid="uuid-station-a",
data={"liquid": [{"liquid_type": "water", "volume": 10.0}]},
),
_make_raw_resource(
id="Flask2", uuid="uuid-flask-2",
name="Flask2", klass="", type_="resource",
parent="StationA", parent_uuid="uuid-station-a",
data={"liquid": [{"liquid_type": "ethanol", "volume": 5.0}]},
),
]
def test_enrichment_preserves_children_structure(self):
"""Verify children is a nested dict after enrichment"""
raw_data = self._make_tree_with_children()
enriched = _simulate_workstation_resource_enrichment(raw_data)
assert enriched["id"] == "StationA"
assert "children" in enriched
assert isinstance(enriched["children"], dict)
assert "Flask1" in enriched["children"]
assert "Flask2" in enriched["children"]
def test_children_preserve_uuid_and_data(self):
"""Verify uuid and data inside children are preserved correctly"""
raw_data = self._make_tree_with_children()
enriched = _simulate_workstation_resource_enrichment(raw_data)
flask1 = enriched["children"]["Flask1"]
assert flask1["uuid"] == "uuid-flask-1"
assert flask1["data"]["liquid"][0]["liquid_type"] == "water"
assert flask1["data"]["liquid"][0]["volume"] == 10.0
flask2 = enriched["children"]["Flask2"]
assert flask2["uuid"] == "uuid-flask-2"
assert flask2["data"]["liquid"][0]["liquid_type"] == "ethanol"
def test_children_dict_can_be_popped(self):
"""Simulate the children pop performed in batch_transfer_protocol"""
raw_data = self._make_tree_with_children()
enriched = _simulate_workstation_resource_enrichment(raw_data)
# batch_transfer_protocol pops entries from children
children = enriched["children"]
popped = children.pop("Flask1")
assert popped["id"] == "Flask1"
assert "Flask1" not in enriched["children"]
assert "Flask2" in enriched["children"]
def test_children_dict_usable_as_from_repo(self):
"""Simulate the from_repo argument of batch_transfer_protocol"""
raw_data = self._make_tree_with_children()
enriched = _simulate_workstation_resource_enrichment(raw_data)
# Simulate the from_repo format the compiler receives
from_repo = {"StationA": enriched}
from_repo_ = list(from_repo.values())[0]
assert from_repo_["id"] == "StationA"
assert "Flask1" in from_repo_["children"]
assert from_repo_["children"]["Flask1"]["uuid"] == "uuid-flask-1"
def test_sequence_resource_enrichment(self):
"""sequence<Resource> case: multiple independent resource trees"""
raw_data1 = [_make_raw_resource(id="R1", uuid="uuid-r1")]
raw_data2 = [_make_raw_resource(id="R2", uuid="uuid-r2")]
tree_set1 = ResourceTreeSet.from_raw_dict_list(raw_data1)
tree_set2 = ResourceTreeSet.from_raw_dict_list(raw_data2)
results = [
tree.root_node.get_plr_nested_dict()
for ts in [tree_set1, tree_set2]
for tree in ts.trees
]
assert len(results) == 2
assert results[0]["id"] == "R1"
assert results[1]["id"] == "R2"
# ============ Full-chain test: action-list structure validation ============
class TestActionListStructure:
"""Verify that compiler-returned action_list structures match what workstation expects"""
def _validate_action(self, action):
"""Validate the structure of a single action dict"""
if action.get("action_name") == "wait":
# The wait pseudo-action needs no device_id
assert "action_kwargs" in action
assert "time" in action["action_kwargs"]
return
if action.get("action_name") == "log_message":
# Log pseudo-action
assert "action_kwargs" in action
return
# Regular device action
assert "device_id" in action, f"action is missing device_id: {action}"
assert "action_name" in action, f"action is missing action_name: {action}"
assert "action_kwargs" in action, f"action is missing action_kwargs: {action}"
assert isinstance(action["action_kwargs"], dict)
def test_stir_action_list_structure(self):
from unilabos.compile.stir_protocol import generate_stir_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_stir_protocol(G=G, vessel=enriched, time="60")
for action in actions:
if isinstance(action, list):
# Parallel actions
for sub_action in action:
self._validate_action(sub_action)
else:
self._validate_action(action)
def test_heatchill_action_list_structure(self):
from unilabos.compile.heatchill_protocol import generate_heat_chill_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_heat_chill_protocol(G=G, vessel=enriched, temp=80.0, time="60")
for action in actions:
if isinstance(action, list):
for sub_action in action:
self._validate_action(sub_action)
else:
self._validate_action(action)
def test_add_action_list_structure(self):
from unilabos.compile.add_protocol import generate_add_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
actions = generate_add_protocol(G=G, vessel=enriched, reagent="NaCl", mass="5 g")
for action in actions:
if isinstance(action, list):
for sub_action in action:
self._validate_action(sub_action)
else:
self._validate_action(action)
# ============ 全链路测试message_converter 到 enrichment ============
class TestMessageConverterToEnrichment:
"""Simulate the full chain from a ROS-message-converted dict to enrichment"""
def test_ros_goal_conversion_simulation(self):
"""
Simulate the complete flow in workstation.py:
1. The vessel field in the ROS goal is converted by convert_from_ros_msg into a shallow dict
2. workstation requests the full resource data from the host using resource_id
3. ResourceTreeSet.from_raw_dict_list builds the resource tree
4. get_plr_nested_dict produces a nested dict that replaces protocol_kwargs[k]
"""
# Step 1: simulate convert_from_ros_msg output (a shallow dict with only basic fields such as id)
shallow_vessel = {
"id": "reactor_01",
"uuid": "uuid-reactor-01",
"name": "reactor_01",
"type": "device",
"category": "virtual_stirrer",
"children": [],
"parent": "",
"parent_uuid": "",
"config": {},
"data": {},
"extra": {},
"position": {"x": 0.0, "y": 0.0, "z": 0.0},
}
protocol_kwargs = {
"vessel": shallow_vessel,
"time": "300",
"stir_speed": 300.0,
}
# Step 2: extract resource_id
resource_id = protocol_kwargs["vessel"]["id"]
assert resource_id == "reactor_01"
# Step 3: simulate the host returning full data (with children)
host_response = [
_make_raw_resource(
id="reactor_01", uuid="uuid-reactor-01",
klass="virtual_stirrer", type_="device",
data={"temperature": 25.0, "pressure": 1.0},
config={"max_temp": 300.0},
),
]
# Step 4: enrichment
enriched = _simulate_workstation_resource_enrichment(host_response)
protocol_kwargs["vessel"] = enriched
# Verify protocol_kwargs after enrichment
assert protocol_kwargs["vessel"]["id"] == "reactor_01"
assert protocol_kwargs["vessel"]["uuid"] == "uuid-reactor-01"
assert protocol_kwargs["vessel"]["class"] == "virtual_stirrer"
assert protocol_kwargs["vessel"]["data"]["temperature"] == 25.0
assert protocol_kwargs["vessel"]["config"]["max_temp"] == 300.0
# Step 5: pass to the compiler
from unilabos.compile.stir_protocol import generate_stir_protocol
G = _build_test_graph()
actions = generate_stir_protocol(G=G, **protocol_kwargs)
assert len(actions) >= 1
assert actions[0]["device_id"] == "stirrer_1"
assert actions[0]["action_name"] == "stir"
def test_ros_goal_with_children_enrichment(self):
"""ROS goal → enrichment with children (batch transfer scenario)"""
# Simulate the host returning data with children
host_response = [
_make_raw_resource(
id="StationA", uuid="uuid-sa", klass="workstation", type_="device",
config={"num_items_x": 4, "num_items_y": 2},
),
_make_raw_resource(
id="Plate1", uuid="uuid-p1", type_="resource",
parent="StationA", parent_uuid="uuid-sa",
data={"sample": "sample_A"},
),
_make_raw_resource(
id="Plate2", uuid="uuid-p2", type_="resource",
parent="StationA", parent_uuid="uuid-sa",
data={"sample": "sample_B"},
),
]
enriched = _simulate_workstation_resource_enrichment(host_response)
assert enriched["id"] == "StationA"
assert enriched["class"] == "workstation"
assert len(enriched["children"]) == 2
assert enriched["children"]["Plate1"]["data"]["sample"] == "sample_A"
assert enriched["children"]["Plate2"]["uuid"] == "uuid-p2"
# Simulate the batch_transfer from_repo format
from_repo = {"StationA": enriched}
from_repo_ = list(from_repo.values())[0]
assert "Plate1" in from_repo_["children"]
assert from_repo_["children"]["Plate1"]["uuid"] == "uuid-p1"
# ============ Full-chain test: sequential multi-protocol calls ============
class TestMultiProtocolChain:
"""Simulate executing multiple protocols in sequence (e.g. add → stir → heatchill)"""
def test_sequential_protocol_execution(self):
"""Simulate a typical synthesis path: add → stir → heatchill"""
from unilabos.compile.stir_protocol import generate_stir_protocol
from unilabos.compile.heatchill_protocol import generate_heat_chill_protocol
from unilabos.compile.add_protocol import generate_add_protocol
raw_data = [_make_raw_resource(
id="reactor_01", uuid="uuid-reactor-01",
klass="virtual_stirrer", type_="device",
)]
enriched = _simulate_workstation_resource_enrichment(raw_data)
G = _build_test_graph()
# Use a copy of enriched for each call so compilers cannot mutate the original data
all_actions = []
# Step 1: add reagent
add_actions = generate_add_protocol(
G=G, vessel=copy.deepcopy(enriched),
reagent="NaCl", mass="5 g",
)
all_actions.extend(add_actions)
# Step 2: stir
stir_actions = generate_stir_protocol(
G=G, vessel=copy.deepcopy(enriched),
time="60", stir_speed=300.0,
)
all_actions.extend(stir_actions)
# Step 3: heat
heat_actions = generate_heat_chill_protocol(
G=G, vessel=copy.deepcopy(enriched),
temp=80.0, time="300",
)
all_actions.extend(heat_actions)
# Verify the combined action list
assert len(all_actions) >= 3
# Each protocol yields at least one core action
action_names = [a.get("action_name", "") for a in all_actions if isinstance(a, dict)]
assert "stir" in action_names
assert "heat_chill" in action_names
def test_enriched_resource_not_mutated(self):
"""Verify compilers do not mutate the passed-in enriched dict (they should deepcopy when mutation is needed)"""
from unilabos.compile.stir_protocol import generate_stir_protocol
raw_data = [_make_raw_resource(id="reactor_01")]
enriched = _simulate_workstation_resource_enrichment(raw_data)
original_id = enriched["id"]
original_uuid = enriched["uuid"]
G = _build_test_graph()
generate_stir_protocol(G=G, vessel=enriched, time="60")
# Verify the core fields of the enriched dict were not modified
assert enriched["id"] == original_id
assert enriched["uuid"] == original_uuid

View File

@@ -1,538 +0,0 @@
"""
PumpTransfer and Separate full-chain tests.
Builds a complete device graph with pumps, valves, and a separating funnel,
and dumps the full intermediate data (shortest paths, pump skeleton, action lists, etc.).
"""
import copy
import json
import pprint
import pytest
import networkx as nx
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.compile.utils.resource_helper import get_resource_id, get_resource_data
from unilabos.compile.utils.vessel_parser import get_vessel
def _make_raw_resource(id, uuid=None, name=None, klass="", type_="device",
parent=None, parent_uuid=None, data=None, config=None, extra=None):
return {
"id": id,
"uuid": uuid or f"uuid-{id}",
"name": name or id,
"class": klass,
"type": type_,
"parent": parent,
"parent_uuid": parent_uuid or "",
"description": "",
"config": config or {},
"data": data or {},
"extra": extra or {},
"position": {"x": 0.0, "y": 0.0, "z": 0.0},
}
def _simulate_enrichment(raw_data_list):
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data_list)
root = tree_set.trees[0].root_node if tree_set.trees else None
return root.get_plr_nested_dict() if root else {}
def _build_pump_transfer_graph():
"""
Build a device graph with pump/valve for testing PumpTransfer:
flask_water (container)
valve_1 (multiway_valve, connected to pump_1)
reactor_01 (device)
Also present: stirrer_1, heatchill_1, separator_1
"""
G = nx.DiGraph()
# Source vessel
G.add_node("flask_water", **{
"id": "flask_water", "name": "flask_water",
"type": "container", "class": "",
"data": {"reagent_name": "water", "liquid": [{"liquid_type": "water", "volume": 200.0}]},
"config": {"reagent": "water"},
})
# Multiway valve
G.add_node("valve_1", **{
"id": "valve_1", "name": "valve_1",
"type": "device", "class": "multiway_valve",
"data": {}, "config": {},
})
# Syringe pump (connected to the valve)
G.add_node("pump_1", **{
"id": "pump_1", "name": "pump_1",
"type": "device", "class": "virtual_pump",
"data": {}, "config": {"max_volume": 25.0},
})
# Target vessel
G.add_node("reactor_01", **{
"id": "reactor_01", "name": "reactor_01",
"type": "device", "class": "virtual_stirrer",
"data": {"liquid": [{"liquid_type": "water", "volume": 50.0}]},
"config": {},
})
# Stirrer
G.add_node("stirrer_1", **{
"id": "stirrer_1", "name": "stirrer_1",
"type": "device", "class": "virtual_stirrer",
"data": {}, "config": {},
})
# Heater
G.add_node("heatchill_1", **{
"id": "heatchill_1", "name": "heatchill_1",
"type": "device", "class": "virtual_heatchill",
"data": {}, "config": {},
})
# Separator
G.add_node("separator_1", **{
"id": "separator_1", "name": "separator_1",
"type": "device", "class": "separator_controller",
"data": {}, "config": {},
})
# Waste vessel
G.add_node("waste_workup", **{
"id": "waste_workup", "name": "waste_workup",
"type": "container", "class": "",
"data": {}, "config": {},
})
# Product collection flask
G.add_node("product_flask", **{
"id": "product_flask", "name": "product_flask",
"type": "container", "class": "",
"data": {}, "config": {},
})
# DCM solvent flask
G.add_node("flask_dcm", **{
"id": "flask_dcm", "name": "flask_dcm",
"type": "container", "class": "",
"data": {"reagent_name": "dcm", "liquid": [{"liquid_type": "dcm", "volume": 500.0}]},
"config": {"reagent": "dcm"},
})
# Edges: flask_water → valve_1 → reactor_01
G.add_edge("flask_water", "valve_1", port={"valve_1": "port_1"})
G.add_edge("valve_1", "reactor_01", port={"valve_1": "port_2"})
# Valve → pump
G.add_edge("valve_1", "pump_1")
G.add_edge("pump_1", "valve_1")
# Stirrer ↔ reactor
G.add_edge("stirrer_1", "reactor_01")
# Heater ↔ reactor
G.add_edge("heatchill_1", "reactor_01")
# Separator ↔ reactor
G.add_edge("separator_1", "reactor_01")
G.add_edge("reactor_01", "separator_1")
# DCM → valve → reactor (same pump line)
G.add_edge("flask_dcm", "valve_1", port={"valve_1": "port_3"})
# reactor → valve → product/waste
G.add_edge("valve_1", "product_flask", port={"valve_1": "port_4"})
G.add_edge("valve_1", "waste_workup", port={"valve_1": "port_5"})
return G
def _format_action(action, indent=0):
"""Format a single action as a readable string"""
prefix = " " * indent
if isinstance(action, list):
# Parallel actions
lines = [f"{prefix}[PARALLEL]"]
for sub in action:
lines.append(_format_action(sub, indent + 1))
return "\n".join(lines)
name = action.get("action_name", "?")
device = action.get("device_id", "")
kwargs = action.get("action_kwargs", {})
comment = action.get("_comment", "")
meta = action.get("_transfer_meta", "")
parts = [f"{prefix}{device}::{name}"]
if kwargs:
# Trim the output
kw_str = ", ".join(f"{k}={v}" for k, v in kwargs.items()
if k not in ("progress_message",))
if kw_str:
parts.append(f" kwargs: {{{kw_str}}}")
if comment:
parts.append(f" # {comment}")
if meta:
parts.append(f" meta: {meta}")
return "\n".join(f"{prefix}{p}" if i > 0 else p for i, p in enumerate(parts))
def _dump_actions(actions, title=""):
"""打印完整动作列表"""
print(f"\n{'='*70}")
print(f" {title}")
print(f" 总动作数: {len(actions)}")
print(f"{'='*70}")
for i, action in enumerate(actions):
print(f"\n [{i:02d}] {_format_action(action, indent=2)}")
print(f"\n{'='*70}\n")
# ==================== PumpTransfer full chain ====================
class TestPumpTransferFullChain:
"""PumpTransfer: 包含图路径查找、泵骨架构建、动作序列生成"""
def test_pump_transfer_basic(self):
"""基础泵转移flask_water → valve_1 → reactor_01"""
from unilabos.compile.pump_protocol import generate_pump_protocol
G = _build_pump_transfer_graph()
# Check the shortest path
path = nx.shortest_path(G, "flask_water", "reactor_01")
print(f"\n最短路径: {path}")
assert "valve_1" in path
# Invoke the compiler
actions = generate_pump_protocol(
G=G,
from_vessel_id="flask_water",
to_vessel_id="reactor_01",
volume=10.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions, "PumpTransfer: flask_water → reactor_01, 10mL")
# Verify
assert isinstance(actions, list)
assert len(actions) > 0
# Expect set_valve_position and set_position actions
flat = [a for a in actions if isinstance(a, dict)]
action_names = [a.get("action_name") for a in flat]
print(f"动作名称列表: {action_names}")
assert "set_valve_position" in action_names
assert "set_position" in action_names
def test_pump_transfer_with_rinsing_enriched_vessel(self):
"""pump_with_rinsing 接收 enriched vessel dict"""
from unilabos.compile.pump_protocol import generate_pump_protocol_with_rinsing
G = _build_pump_transfer_graph()
# Simulate enrichment
from_raw = [_make_raw_resource(
id="flask_water", klass="", type_="container",
data={"reagent_name": "water", "liquid": [{"liquid_type": "water", "volume": 200.0}]},
)]
to_raw = [_make_raw_resource(
id="reactor_01", klass="virtual_stirrer", type_="device",
)]
from_enriched = _simulate_enrichment(from_raw)
to_enriched = _simulate_enrichment(to_raw)
print(f"\nfrom_vessel enriched: {json.dumps(from_enriched, indent=2, ensure_ascii=False)[:300]}...")
print(f"to_vessel enriched: {json.dumps(to_enriched, indent=2, ensure_ascii=False)[:300]}...")
# get_vessel compatibility
fid, fdata = get_vessel(from_enriched)
tid, tdata = get_vessel(to_enriched)
print(f"from_vessel_id={fid}, to_vessel_id={tid}")
assert fid == "flask_water"
assert tid == "reactor_01"
actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=from_enriched,
to_vessel=to_enriched,
volume=15.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions, "PumpTransferWithRinsing: flask_water → reactor_01, 15mL (enriched)")
assert isinstance(actions, list)
assert len(actions) > 0
def test_pump_transfer_multi_batch(self):
"""体积 > max_volume 时自动分批"""
from unilabos.compile.pump_protocol import generate_pump_protocol
G = _build_pump_transfer_graph()
# pump_1 has max_volume = 25 mL, so transferring 60 mL should take 3 batches
actions = generate_pump_protocol(
G=G,
from_vessel_id="flask_water",
to_vessel_id="reactor_01",
volume=60.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions, "PumpTransfer 分批: 60mL (max_volume=25mL, 预期 3 批)")
assert len(actions) > 0
# Expect multiple rounds of set_position
flat = [a for a in actions if isinstance(a, dict)]
set_position_count = sum(1 for a in flat if a.get("action_name") == "set_position")
print(f"set_position 动作数: {set_position_count}")
# 3 batches × 2 strokes (aspirate + dispense) = 6 set_position calls
assert set_position_count >= 6
def test_pump_transfer_no_path(self):
"""无路径时返回空"""
from unilabos.compile.pump_protocol import generate_pump_protocol
G = _build_pump_transfer_graph()
G.add_node("isolated_flask", type="container")
actions = generate_pump_protocol(
G=G,
from_vessel_id="isolated_flask",
to_vessel_id="reactor_01",
volume=10.0,
)
print(f"\n无路径时的动作列表: {actions}")
assert actions == []
def test_pump_backbone_filtering(self):
"""验证泵骨架过滤逻辑(电磁阀被跳过)"""
from unilabos.compile.pump_protocol import generate_pump_protocol
G = _build_pump_transfer_graph()
# Insert a solenoid valve into the path
G.add_node("solenoid_valve_1", **{
"type": "device", "class": "solenoid_valve",
"data": {}, "config": {},
})
# flask_water → solenoid_valve_1 → valve_1 → reactor_01
G.remove_edge("flask_water", "valve_1")
G.add_edge("flask_water", "solenoid_valve_1")
G.add_edge("solenoid_valve_1", "valve_1")
path = nx.shortest_path(G, "flask_water", "reactor_01")
print(f"\n含电磁阀的路径: {path}")
assert "solenoid_valve_1" in path
actions = generate_pump_protocol(
G=G,
from_vessel_id="flask_water",
to_vessel_id="reactor_01",
volume=10.0,
)
_dump_actions(actions, "PumpTransfer 含电磁阀: flask_water → solenoid → valve_1 → reactor_01")
# The solenoid valve should be skipped; the pump backbone contains only valve_1
assert len(actions) > 0
# ==================== Separate full chain ====================
class TestSeparateProtocolFullChain:
"""Separate: 包含 bug 确认和正常路径测试"""
def test_separate_bug_line_128_fixed(self):
"""验证 separate_protocol.py:128 的 bug 已修复(不再 crash"""
from unilabos.compile.separate_protocol import generate_separate_protocol
G = _build_pump_transfer_graph()
raw_data = [_make_raw_resource(
id="reactor_01", klass="virtual_stirrer",
data={"liquid": [{"liquid_type": "water", "volume": 100.0}]},
)]
enriched = _simulate_enrichment(raw_data)
# Before the fix: final_vessel_id, _ = vessel_id crashed (unpacking a string)
# After the fix: final_vessel_id = vessel_id returns the action list normally
result = generate_separate_protocol(
G=G,
vessel=enriched,
purpose="extract",
product_phase="top",
product_vessel="product_flask",
waste_vessel="waste_workup",
solvent="dcm",
volume="100 mL",
)
assert isinstance(result, list)
assert len(result) > 0
def test_separate_manual_workaround(self):
"""
Bypass the line 128 bug and manually test the sub-functions of the separate compiler that do work
"""
from unilabos.compile.separate_protocol import (
find_separator_device,
find_separation_vessel_bottom,
)
from unilabos.compile.utils.vessel_parser import (
find_connected_stirrer,
find_solvent_vessel,
)
from unilabos.compile.utils.unit_parser import parse_volume_input
from unilabos.compile.utils.resource_helper import get_resource_liquid_volume as get_vessel_liquid_volume
G = _build_pump_transfer_graph()
# 1. get_vessel parses the enriched dict
raw_data = [_make_raw_resource(
id="reactor_01", klass="virtual_stirrer",
data={"liquid": [{"liquid_type": "water", "volume": 100.0}]},
)]
enriched = _simulate_enrichment(raw_data)
vessel_id, vessel_data = get_vessel(enriched)
print(f"\nvessel_id: {vessel_id}")
print(f"vessel_data: {vessel_data}")
assert vessel_id == "reactor_01"
assert vessel_data["liquid"][0]["volume"] == 100.0
# 2. find_separator_device
sep = find_separator_device(G, vessel_id)
print(f"分离器设备: {sep}")
assert sep == "separator_1"
# 3. find_connected_stirrer
stirrer = find_connected_stirrer(G, vessel_id)
print(f"搅拌器设备: {stirrer}")
assert stirrer == "stirrer_1"
# 4. find_solvent_vessel
solvent_v = find_solvent_vessel(G, "dcm")
print(f"DCM溶剂容器: {solvent_v}")
assert solvent_v == "flask_dcm"
# 5. parse_volume_input
vol = parse_volume_input("200 mL")
print(f"体积解析: '200 mL'{vol}")
assert vol == 200.0
vol2 = parse_volume_input("1.5 L")
print(f"体积解析: '1.5 L'{vol2}")
assert vol2 == 1500.0
# 6. get_vessel_liquid_volume
liq_vol = get_vessel_liquid_volume(enriched)
print(f"液体体积 (enriched dict): {liq_vol}")
assert liq_vol == 100.0
# 7. find_separation_vessel_bottom
bottom = find_separation_vessel_bottom(G, vessel_id)
print(f"分离容器底部: {bottom}")
# 当前图中没有命名匹配的底部容器
def test_pump_transfer_for_separate_subflow(self):
"""测试 separate 中调用的 pump 子流程(溶剂添加 → 分液漏斗)"""
from unilabos.compile.pump_protocol import generate_pump_protocol_with_rinsing
G = _build_pump_transfer_graph()
# Simulate the solvent-addition step before separation
actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel="flask_dcm",
to_vessel="reactor_01",
volume=100.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions, "Separate 子流程: flask_dcm → reactor_01, 100mL DCM")
assert isinstance(actions, list)
assert len(actions) > 0
# Simulate the product transfer after separation
actions2 = generate_pump_protocol_with_rinsing(
G=G,
from_vessel="reactor_01",
to_vessel="product_flask",
volume=50.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions2, "Separate 子流程: reactor_01 → product_flask, 50mL 产物")
assert len(actions2) > 0
# Waste transfer
actions3 = generate_pump_protocol_with_rinsing(
G=G,
from_vessel="reactor_01",
to_vessel="waste_workup",
volume=50.0,
flowrate=2.5,
transfer_flowrate=0.5,
)
_dump_actions(actions3, "Separate 子流程: reactor_01 → waste_workup, 50mL 废液")
assert len(actions3) > 0
# ==================== Graph path visualization ====================
class TestGraphPathVisualization:
"""输出图中关键路径信息"""
def test_all_shortest_paths(self):
"""输出所有容器之间的最短路径"""
G = _build_pump_transfer_graph()
containers = [n for n in G.nodes() if G.nodes[n].get("type") == "container"]
devices = [n for n in G.nodes() if G.nodes[n].get("type") == "device"]
print(f"\n{'='*70}")
print(f" 设备图概览")
print(f"{'='*70}")
print(f" 容器节点 ({len(containers)}): {containers}")
print(f" 设备节点 ({len(devices)}): {devices}")
print(f" 边数: {G.number_of_edges()}")
print(f" 边列表:")
for u, v, data in G.edges(data=True):
port_info = data.get("port", "")
print(f" {u}{v} {port_info if port_info else ''}")
print(f"\n 关键路径:")
pairs = [
("flask_water", "reactor_01"),
("flask_dcm", "reactor_01"),
("reactor_01", "product_flask"),
("reactor_01", "waste_workup"),
("flask_water", "product_flask"),
]
for src, dst in pairs:
try:
path = nx.shortest_path(G, src, dst)
length = len(path) - 1
# Annotate node types along the path
annotated = []
for n in path:
ntype = G.nodes[n].get("type", "?")
nclass = G.nodes[n].get("class", "")
annotated.append(f"{n}({ntype}{'/' + nclass if nclass else ''})")
print(f" {src} → {dst}: distance={length}")
print(f" Path: {' → '.join(annotated)}")
except nx.NetworkXNoPath:
print(f" {src} → {dst}: no path!")
print(f"{'='*70}\n")


@@ -1,324 +0,0 @@
"""
Integration tests for the ROS Goal → Resource conversion → compiler path
Covers:
1. Round-trip conversion of the new Resource.msg fields (uuid, klass, extra)
2. Lossless dict → ROS Resource → dict round trip
3. ResourceTreeSet → get_plr_nested_dict preserves the children structure
4. resource_helper is compatible with dict / ResourceDictInstance
5. vessel_parser.get_vessel is compatible with ResourceDictInstance
"""
import json
import pytest
# Tests that do not depend on ROS: exercise the resource handling path directly
from unilabos.resources.resource_tracker import (
ResourceDict,
ResourceDictInstance,
ResourceTreeInstance,
ResourceTreeSet,
)
from unilabos.compile.utils.resource_helper import (
ensure_resource_instance,
resource_to_dict,
get_resource_id,
get_resource_data,
get_resource_display_info,
get_resource_liquid_volume,
)
from unilabos.compile.utils.vessel_parser import get_vessel
# ============ Build test data ============
def _make_resource_dict(
id="reactor_01",
uuid="uuid-reactor-01",
name="reactor_01",
klass="virtual_stirrer",
type_="device",
parent=None,
parent_uuid=None,
data=None,
config=None,
extra=None,
):
return {
"id": id,
"uuid": uuid,
"name": name,
"class": klass,
"type": type_,
"parent": parent,
"parent_uuid": parent_uuid or "",
"description": "",
"config": config or {},
"data": data or {},
"extra": extra or {},
"position": {"x": 1.0, "y": 2.0, "z": 3.0},
}
def _make_resource_instance(id="reactor_01", **kwargs):
d = _make_resource_dict(id=id, **kwargs)
return ResourceDictInstance.get_resource_instance_from_dict(d)
def _make_tree_with_children():
"""构建 StationA -> [R1, R2] 的资源树"""
raw_data = [
_make_resource_dict(
id="StationA",
uuid="uuid-station-a",
name="StationA",
klass="workstation",
type_="device",
),
_make_resource_dict(
id="R1",
uuid="uuid-r1",
name="R1",
klass="",
type_="resource",
parent="StationA",
parent_uuid="uuid-station-a",
data={"liquid": [{"liquid_type": "water", "volume": 10.0}]},
),
_make_resource_dict(
id="R2",
uuid="uuid-r2",
name="R2",
klass="",
type_="resource",
parent="StationA",
parent_uuid="uuid-station-a",
data={"liquid": [{"liquid_type": "ethanol", "volume": 5.0}]},
),
]
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data)
return tree_set
# ============ resource_helper tests ============
class TestResourceHelper:
"""测试 resource_helper 对 dict / ResourceDictInstance 的兼容性"""
def test_ensure_resource_instance_from_dict(self):
d = _make_resource_dict()
inst = ensure_resource_instance(d)
assert isinstance(inst, ResourceDictInstance)
assert inst.res_content.id == "reactor_01"
assert inst.res_content.uuid == "uuid-reactor-01"
def test_ensure_resource_instance_passthrough(self):
inst = _make_resource_instance()
result = ensure_resource_instance(inst)
assert result is inst # same object, not a copy
def test_ensure_resource_instance_none(self):
assert ensure_resource_instance(None) is None
def test_get_resource_id_from_dict(self):
d = _make_resource_dict(id="my_device")
assert get_resource_id(d) == "my_device"
def test_get_resource_id_from_instance(self):
inst = _make_resource_instance(id="my_device")
assert get_resource_id(inst) == "my_device"
def test_get_resource_id_from_string(self):
assert get_resource_id("my_device") == "my_device"
def test_get_resource_id_from_wrapped_dict(self):
"""兼容 {station_id: {...}} 格式"""
d = {"StationA": {"id": "StationA", "name": "StationA"}}
assert get_resource_id(d) == "StationA"
def test_get_resource_data_from_dict(self):
d = _make_resource_dict(data={"temperature": 25.0})
assert get_resource_data(d) == {"temperature": 25.0}
def test_get_resource_data_from_instance(self):
inst = _make_resource_instance(data={"temperature": 25.0})
data = get_resource_data(inst)
assert data["temperature"] == 25.0
def test_get_resource_display_info_from_dict(self):
d = _make_resource_dict(id="reactor_01", name="Reactor #1")
info = get_resource_display_info(d)
assert "reactor_01" in info
assert "Reactor #1" in info
def test_get_resource_display_info_from_instance(self):
inst = _make_resource_instance(id="reactor_01", name="Reactor #1")
info = get_resource_display_info(inst)
assert "reactor_01" in info
def test_get_resource_display_info_from_string(self):
assert get_resource_display_info("reactor_01") == "reactor_01"
def test_get_resource_liquid_volume(self):
d = _make_resource_dict(data={"liquid": [{"liquid_type": "water", "volume": 15.5}]})
assert get_resource_liquid_volume(d) == pytest.approx(15.5)
def test_resource_to_dict_from_instance(self):
inst = _make_resource_instance(id="reactor_01", klass="virtual_stirrer")
d = resource_to_dict(inst)
assert isinstance(d, dict)
assert d["id"] == "reactor_01"
assert d["class"] == "virtual_stirrer"
def test_resource_to_dict_passthrough(self):
d = _make_resource_dict()
result = resource_to_dict(d)
assert result is d # the very same dict
# ============ vessel_parser compatibility tests ============
class TestVesselParser:
"""测试 vessel_parser.get_vessel 对 ResourceDictInstance 的兼容"""
def test_get_vessel_from_dict(self):
d = _make_resource_dict(id="reactor_01", data={"temperature": 25.0})
vessel_id, vessel_data = get_vessel(d)
assert vessel_id == "reactor_01"
assert vessel_data["temperature"] == 25.0
def test_get_vessel_from_string(self):
vessel_id, vessel_data = get_vessel("reactor_01")
assert vessel_id == "reactor_01"
assert vessel_data == {}
def test_get_vessel_from_resource_instance(self):
inst = _make_resource_instance(id="reactor_01", data={"temperature": 25.0})
vessel_id, vessel_data = get_vessel(inst)
assert vessel_id == "reactor_01"
assert vessel_data["temperature"] == 25.0
def test_get_vessel_from_wrapped_dict(self):
"""兼容 {station_id: {id: ..., data: {...}}} 格式"""
d = {"StationA": {"id": "StationA", "data": {"vol": 100}}}
vessel_id, vessel_data = get_vessel(d)
assert vessel_id == "StationA"
# ============ ResourceTreeSet → get_plr_nested_dict tests ============
class TestResourceTreeRoundTrip:
"""测试 ResourceTreeSet → get_plr_nested_dict 保留树结构和关键字段"""
def test_tree_preserves_children(self):
tree_set = _make_tree_with_children()
assert len(tree_set.trees) == 1
root = tree_set.trees[0].root_node
assert root.res_content.id == "StationA"
assert len(root.children) == 2
def test_plr_nested_dict_has_children(self):
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
assert isinstance(nested, dict)
assert "children" in nested
assert isinstance(nested["children"], dict)
assert "R1" in nested["children"]
assert "R2" in nested["children"]
def test_plr_nested_dict_preserves_uuid(self):
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
assert nested["uuid"] == "uuid-station-a"
assert nested["children"]["R1"]["uuid"] == "uuid-r1"
def test_plr_nested_dict_preserves_klass(self):
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
assert nested["class"] == "workstation"
def test_plr_nested_dict_preserves_data(self):
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
r1_data = nested["children"]["R1"]["data"]
assert "liquid" in r1_data
assert r1_data["liquid"][0]["volume"] == 10.0
def test_plr_nested_dict_usable_by_get_vessel(self):
"""get_plr_nested_dict 的结果可以直接传给 get_vessel"""
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
vessel_id, vessel_data = get_vessel(nested)
assert vessel_id == "StationA"
def test_dump_vs_plr_nested_dict(self):
"""dump() 是扁平化的get_plr_nested_dict 保留树结构"""
tree_set = _make_tree_with_children()
# dump returns a flat list
dumped = tree_set.dump()
assert isinstance(dumped[0], list)
assert len(dumped[0]) == 3 # StationA + R1 + R2, all flattened
# get_plr_nested_dict keeps the nesting
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
assert isinstance(nested["children"], dict)
assert len(nested["children"]) == 2 # 嵌套的 children
# ============ Simulated workstation path tests ============
class TestWorkstationPath:
"""模拟 workstation.py 中的关键路径:
raw_data → ResourceTreeSet.from_raw_dict_list → get_plr_nested_dict → compiler
"""
def test_single_resource_path(self):
"""单个 Resource: 取第一棵树的根节点"""
raw_data = [
_make_resource_dict(id="reactor_01", uuid="uuid-r01", klass="virtual_stirrer"),
]
tree_set = ResourceTreeSet.from_raw_dict_list(raw_data)
root = tree_set.trees[0].root_node
result = root.get_plr_nested_dict()
assert result["id"] == "reactor_01"
assert result["uuid"] == "uuid-r01"
assert result["class"] == "virtual_stirrer"
def test_resource_with_children_path(self):
"""Resource 带 children: AGV/batch transfer 场景"""
tree_set = _make_tree_with_children()
root = tree_set.trees[0].root_node
nested = root.get_plr_nested_dict()
# Simulate the argument received by the compiler
from_repo = {"StationA": nested}
assert "A01" not in from_repo["StationA"]["children"] # children 按 id 索引
assert "R1" in from_repo["StationA"]["children"]
assert from_repo["StationA"]["children"]["R1"]["uuid"] == "uuid-r1"
def test_multiple_resource_path(self):
"""多个 Resource: 每棵树取根节点"""
raw_data1 = [_make_resource_dict(id="R1", uuid="uuid-r1")]
raw_data2 = [_make_resource_dict(id="R2", uuid="uuid-r2")]
# Simulate the host returning multiple trees
tree_set1 = ResourceTreeSet.from_raw_dict_list(raw_data1)
tree_set2 = ResourceTreeSet.from_raw_dict_list(raw_data2)
results = [
tree.root_node.get_plr_nested_dict()
for ts in [tree_set1, tree_set2]
for tree in ts.trees
]
assert len(results) == 2
assert results[0]["id"] == "R1"
assert results[1]["id"] == "R2"


@@ -1 +0,0 @@


@@ -1,5 +0,0 @@
"""
Tests related to liquid handling devices.
"""


@@ -1,505 +0,0 @@
import asyncio
from dataclasses import dataclass
from typing import Any, Iterable, List, Optional, Sequence, Tuple
import pytest
from unilabos.devices.liquid_handling.liquid_handler_abstract import LiquidHandlerAbstract
@dataclass(frozen=True)
class DummyContainer:
name: str
def __repr__(self) -> str: # pragma: no cover
return f"DummyContainer({self.name})"
@dataclass(frozen=True)
class DummyTipSpot:
name: str
def __repr__(self) -> str: # pragma: no cover
return f"DummyTipSpot({self.name})"
def make_tip_iter(n: int = 256) -> Iterable[List[DummyTipSpot]]:
"""Yield lists so code can safely call `tip.extend(next(self.current_tip))`."""
for i in range(n):
yield [DummyTipSpot(f"tip_{i}")]
class FakeLiquidHandler(LiquidHandlerAbstract):
"""不初始化真实 backend/deck仅用来记录 transfer_liquid 内部调用序列。"""
def __init__(self, channel_num: int = 8):
# Do not call super().__init__ to avoid real hardware/backend dependencies
self.channel_num = channel_num
self.support_touch_tip = True
self.current_tip = iter(make_tip_iter())
self.calls: List[Tuple[str, Any]] = []
async def pick_up_tips(self, tip_spots, use_channels=None, offsets=None, **backend_kwargs):
self.calls.append(("pick_up_tips", {"tips": list(tip_spots), "use_channels": use_channels}))
async def aspirate(
self,
resources: Sequence[Any],
vols: List[float],
use_channels: Optional[List[int]] = None,
flow_rates: Optional[List[Optional[float]]] = None,
offsets: Any = None,
liquid_height: Any = None,
blow_out_air_volume: Any = None,
spread: str = "wide",
**backend_kwargs,
):
self.calls.append(
(
"aspirate",
{
"resources": list(resources),
"vols": list(vols),
"use_channels": list(use_channels) if use_channels is not None else None,
"flow_rates": list(flow_rates) if flow_rates is not None else None,
"offsets": list(offsets) if offsets is not None else None,
"liquid_height": list(liquid_height) if liquid_height is not None else None,
"blow_out_air_volume": list(blow_out_air_volume) if blow_out_air_volume is not None else None,
},
)
)
async def dispense(
self,
resources: Sequence[Any],
vols: List[float],
use_channels: Optional[List[int]] = None,
flow_rates: Optional[List[Optional[float]]] = None,
offsets: Any = None,
liquid_height: Any = None,
blow_out_air_volume: Any = None,
spread: str = "wide",
**backend_kwargs,
):
self.calls.append(
(
"dispense",
{
"resources": list(resources),
"vols": list(vols),
"use_channels": list(use_channels) if use_channels is not None else None,
"flow_rates": list(flow_rates) if flow_rates is not None else None,
"offsets": list(offsets) if offsets is not None else None,
"liquid_height": list(liquid_height) if liquid_height is not None else None,
"blow_out_air_volume": list(blow_out_air_volume) if blow_out_air_volume is not None else None,
},
)
)
async def discard_tips(self, use_channels=None, *args, **kwargs):
# Some branches call discard_tips(use_channels=[0]); others call discard_tips([0..7]) (positional)
self.calls.append(("discard_tips", {"use_channels": list(use_channels) if use_channels is not None else None}))
async def custom_delay(self, seconds=0, msg=None):
self.calls.append(("custom_delay", {"seconds": seconds, "msg": msg}))
async def touch_tip(self, targets):
# The original implementation accesses targets.get_size_x() etc.; the test only records the call
self.calls.append(("touch_tip", {"targets": targets}))
async def mix(self, targets, mix_time=None, mix_vol=None, height_to_bottom=None, offsets=None, mix_rate=None, none_keys=None):
self.calls.append(
(
"mix",
{
"targets": targets,
"mix_time": mix_time,
"mix_vol": mix_vol,
},
)
)
def run(coro):
return asyncio.run(coro)
def test_one_to_one_single_channel_basic_calls():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer(f"S{i}") for i in range(3)]
targets = [DummyContainer(f"T{i}") for i in range(3)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=[1, 2, 3],
dis_vols=[4, 5, 6],
mix_times=None, # should still execute (no mixing)
)
)
assert [c[0] for c in lh.calls].count("pick_up_tips") == 3
assert [c[0] for c in lh.calls].count("aspirate") == 3
assert [c[0] for c in lh.calls].count("dispense") == 3
assert [c[0] for c in lh.calls].count("discard_tips") == 3
# Each aspirate/dispense is a single-well list
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert aspirates[0]["resources"] == [sources[0]]
assert aspirates[0]["vols"] == [1.0]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert dispenses[2]["resources"] == [targets[2]]
assert dispenses[2]["vols"] == [6.0]
def test_one_to_one_single_channel_before_stage_mixes_prior_to_aspirate():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(16))
source = DummyContainer("S0")
target = DummyContainer("T0")
run(
lh.transfer_liquid(
sources=[source],
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=[5],
dis_vols=[5],
mix_stage="before",
mix_times=1,
mix_vol=3,
)
)
names = [name for name, _ in lh.calls]
assert names.count("mix") == 1
assert names.index("mix") < names.index("aspirate")
def test_one_to_one_eight_channel_groups_by_8():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(256))
sources = [DummyContainer(f"S{i}") for i in range(16)]
targets = [DummyContainer(f"T{i}") for i in range(16)]
asp_vols = list(range(1, 17))
dis_vols = list(range(101, 117))
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
mix_times=0, # triggers the logic but does not mix
)
)
# 16 tasks -> 2 groups, each handled by all 8 channels at once
assert [c[0] for c in lh.calls].count("pick_up_tips") == 2
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(aspirates) == 2
assert len(dispenses) == 2
assert aspirates[0]["resources"] == sources[0:8]
assert aspirates[0]["vols"] == [float(v) for v in asp_vols[0:8]]
assert dispenses[1]["resources"] == targets[8:16]
assert dispenses[1]["vols"] == [float(v) for v in dis_vols[8:16]]
def test_one_to_one_eight_channel_requires_multiple_of_8_targets():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer(f"S{i}") for i in range(9)]
targets = [DummyContainer(f"T{i}") for i in range(9)]
with pytest.raises(ValueError, match="multiple of 8"):
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=[1] * 9,
dis_vols=[1] * 9,
mix_times=0,
)
)
def test_one_to_one_eight_channel_parameter_lists_are_chunked_per_8():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(512))
sources = [DummyContainer(f"S{i}") for i in range(16)]
targets = [DummyContainer(f"T{i}") for i in range(16)]
asp_vols = [i + 1 for i in range(16)]
dis_vols = [200 + i for i in range(16)]
asp_flow_rates = [0.1 * (i + 1) for i in range(16)]
dis_flow_rates = [0.2 * (i + 1) for i in range(16)]
offsets = [f"offset_{i}" for i in range(16)]
liquid_heights = [i * 0.5 for i in range(16)]
blow_out_air_volume = [i + 0.05 for i in range(16)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
asp_flow_rates=asp_flow_rates,
dis_flow_rates=dis_flow_rates,
offsets=offsets,
liquid_height=liquid_heights,
blow_out_air_volume=blow_out_air_volume,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(aspirates) == len(dispenses) == 2
for batch_idx in range(2):
start = batch_idx * 8
end = start + 8
asp_call = aspirates[batch_idx]
dis_call = dispenses[batch_idx]
assert asp_call["resources"] == sources[start:end]
assert asp_call["flow_rates"] == asp_flow_rates[start:end]
assert asp_call["offsets"] == offsets[start:end]
assert asp_call["liquid_height"] == liquid_heights[start:end]
assert asp_call["blow_out_air_volume"] == blow_out_air_volume[start:end]
assert dis_call["flow_rates"] == dis_flow_rates[start:end]
assert dis_call["offsets"] == offsets[start:end]
assert dis_call["liquid_height"] == liquid_heights[start:end]
assert dis_call["blow_out_air_volume"] == blow_out_air_volume[start:end]
def test_one_to_one_eight_channel_handles_32_tasks_four_batches():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(1024))
sources = [DummyContainer(f"S{i}") for i in range(32)]
targets = [DummyContainer(f"T{i}") for i in range(32)]
asp_vols = [i + 1 for i in range(32)]
dis_vols = [300 + i for i in range(32)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
mix_times=0,
)
)
pick_calls = [name for name, _ in lh.calls if name == "pick_up_tips"]
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(pick_calls) == 4
assert len(aspirates) == len(dispenses) == 4
assert aspirates[0]["resources"] == sources[0:8]
assert aspirates[-1]["resources"] == sources[24:32]
assert dispenses[0]["resources"] == targets[0:8]
assert dispenses[-1]["resources"] == targets[24:32]
def test_one_to_many_single_channel_aspirates_total_when_asp_vol_too_small():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(64))
source = DummyContainer("SRC")
targets = [DummyContainer(f"T{i}") for i in range(3)]
dis_vols = [10, 20, 30] # sum=60
run(
lh.transfer_liquid(
sources=[source],
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=10, # less than sum(dis_vols) -> should aspirate 60
dis_vols=dis_vols,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert len(aspirates) == 1
assert aspirates[0]["resources"] == [source]
assert aspirates[0]["vols"] == [60.0]
assert aspirates[0]["use_channels"] == [0]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [10.0, 20.0, 30.0]
def test_one_to_many_eight_channel_basic():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(128))
source = DummyContainer("SRC")
targets = [DummyContainer(f"T{i}") for i in range(8)]
dis_vols = [i + 1 for i in range(8)]
run(
lh.transfer_liquid(
sources=[source],
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=999, # one-to-many 8ch aspirates per dis_vols (each channel individually)
dis_vols=dis_vols,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert aspirates[0]["resources"] == [source] * 8
assert aspirates[0]["vols"] == [float(v) for v in dis_vols]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert dispenses[0]["resources"] == targets
assert dispenses[0]["vols"] == [float(v) for v in dis_vols]
def test_many_to_one_single_channel_standard_dispense_equals_asp_by_default():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer(f"S{i}") for i in range(3)]
target = DummyContainer("T")
asp_vols = [5, 6, 7]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=asp_vols,
dis_vols=1, # many-to-one allows a scalar; in non-proportional mode each dispense = the matching asp_vol
mix_times=0,
)
)
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [float(v) for v in asp_vols]
assert all(d["resources"] == [target] for d in dispenses)
def test_many_to_one_single_channel_before_stage_mixes_target_once():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer("S0"), DummyContainer("S1")]
target = DummyContainer("T")
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=[5, 6],
dis_vols=1,
mix_stage="before",
mix_times=2,
mix_vol=4,
)
)
names = [name for name, _ in lh.calls]
assert names[0] == "mix"
assert names.count("mix") == 1
def test_many_to_one_single_channel_proportional_mixing_uses_dis_vols_per_source():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer(f"S{i}") for i in range(3)]
target = DummyContainer("T")
asp_vols = [5, 6, 7]
dis_vols = [1, 2, 3]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=asp_vols,
dis_vols=dis_vols, # proportional mode
mix_times=0,
)
)
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [float(v) for v in dis_vols]
def test_many_to_one_eight_channel_basic():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(256))
sources = [DummyContainer(f"S{i}") for i in range(8)]
target = DummyContainer("T")
asp_vols = [10 + i for i in range(8)]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=999, # in non-proportional mode each channel dispenses its matching asp_vol
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert aspirates[0]["resources"] == sources
assert aspirates[0]["vols"] == [float(v) for v in asp_vols]
assert dispenses[0]["resources"] == [target] * 8
assert dispenses[0]["vols"] == [float(v) for v in asp_vols]
def test_transfer_liquid_mode_detection_unsupported_shape_raises():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer("S0"), DummyContainer("S1")]
targets = [DummyContainer("T0"), DummyContainer("T1"), DummyContainer("T2")]
with pytest.raises(ValueError, match="Unsupported transfer mode"):
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=[1, 1],
dis_vols=[1, 1, 1],
mix_times=0,
)
)

View File

@@ -1,137 +0,0 @@
"""
AGVTransportStation driver 测试
覆盖初始化、carrier property、slot 查询、路由查询、capacity 计算。
"""
import pytest
from unittest.mock import MagicMock, patch
from unilabos.devices.transport.agv_workstation import AGVTransportStation
from unilabos.resources.warehouse import WareHouse, warehouse_factory
class TestAGVTransportStation:
def _make_driver(self, route_table=None, device_roles=None):
"""创建一个 AGVTransportStation 实例"""
return AGVTransportStation(
deck=None,
route_table=route_table or {
"A->B": {"nav_command": '{"target":"LM1"}', "arm_pick": "pick.urp", "arm_place": "place.urp"}
},
device_roles=device_roles or {"navigator": "agv_nav", "arm": "agv_arm"},
)
def _make_warehouse(self, name="agv_platform", nx=2, ny=1, nz=1):
"""创建一个测试用 Warehouse"""
return warehouse_factory(name=name, num_items_x=nx, num_items_y=ny, num_items_z=nz)
def test_init_deck_none(self):
"""AGVTransportStation 初始化时 deck=None"""
driver = self._make_driver()
assert driver.deck is None
def test_init_route_table(self):
"""路由表正确存储"""
driver = self._make_driver()
assert "A->B" in driver.route_table
def test_init_device_roles(self):
"""设备角色正确存储"""
driver = self._make_driver()
assert driver.device_roles["navigator"] == "agv_nav"
assert driver.device_roles["arm"] == "agv_arm"
def test_carrier_without_ros_node(self):
"""未 post_init 时 carrier 返回 None"""
driver = self._make_driver()
assert driver.carrier is None
def test_carrier_with_warehouse(self):
"""post_init 后 carrier 返回正确的 WareHouse"""
driver = self._make_driver()
wh = self._make_warehouse()
# Mock the ros_node and resource_tracker
mock_ros_node = MagicMock()
mock_ros_node.resource_tracker.resources = [wh]
mock_ros_node.device_id = "AGV"
driver.post_init(mock_ros_node)
assert driver.carrier is wh
assert isinstance(driver.carrier, WareHouse)
def test_capacity(self):
"""容量计算正确"""
driver = self._make_driver()
wh = self._make_warehouse(nx=2, ny=1, nz=1)
mock_ros_node = MagicMock()
mock_ros_node.resource_tracker.resources = [wh]
mock_ros_node.device_id = "AGV"
driver.post_init(mock_ros_node)
assert driver.capacity == 2
def test_capacity_multi_layer(self):
"""多层 Warehouse 容量"""
driver = self._make_driver()
wh = self._make_warehouse(nx=1, ny=2, nz=3)
mock_ros_node = MagicMock()
mock_ros_node.resource_tracker.resources = [wh]
mock_ros_node.device_id = "AGV"
driver.post_init(mock_ros_node)
assert driver.capacity == 6
def test_capacity_no_carrier(self):
"""无 carrier 时容量为 0"""
driver = self._make_driver()
assert driver.capacity == 0
def test_free_slots(self):
"""空载时所有 slot 为空闲"""
driver = self._make_driver()
wh = self._make_warehouse(nx=2, ny=1, nz=1)
mock_ros_node = MagicMock()
mock_ros_node.resource_tracker.resources = [wh]
mock_ros_node.device_id = "AGV"
driver.post_init(mock_ros_node)
free = driver.free_slots
assert len(free) == 2
def test_occupied_slots_empty(self):
"""空载时 occupied_slots 为空"""
driver = self._make_driver()
wh = self._make_warehouse(nx=2, ny=1, nz=1)
mock_ros_node = MagicMock()
mock_ros_node.resource_tracker.resources = [wh]
mock_ros_node.device_id = "AGV"
driver.post_init(mock_ros_node)
assert len(driver.occupied_slots) == 0
def test_resolve_route(self):
"""路由查询返回正确的指令"""
driver = self._make_driver()
route = driver.resolve_route("A", "B")
assert route["nav_command"] == '{"target":"LM1"}'
assert route["arm_pick"] == "pick.urp"
def test_resolve_route_not_found(self):
"""查询不存在的路线时抛出 KeyError"""
driver = self._make_driver()
with pytest.raises(KeyError, match="路由表"):
driver.resolve_route("X", "Y")
def test_get_device_id(self):
"""获取子设备 ID"""
driver = self._make_driver()
assert driver.get_device_id("navigator") == "agv_nav"
assert driver.get_device_id("arm") == "agv_arm"
def test_get_device_id_not_found(self):
"""获取不存在的角色时抛出 KeyError"""
driver = self._make_driver()
with pytest.raises(KeyError, match="未配置设备角色"):
driver.get_device_id("gripper")

View File

@@ -2,8 +2,9 @@ import pytest
import json
import os
from pylabrobot.resources import Resource as ResourcePLR
from unilabos.resources.graphio import resource_bioyond_to_plr
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.registry.registry import lab_registry
from unilabos.resources.bioyond.decks import BIOYOND_PolymerReactionStation_Deck

View File

@@ -11,10 +11,10 @@ import os
# Add the project root directory to the path
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))))
# Import test modules (uniformly from the tests package)
from tests.ros.msgs.test_basic import TestBasicFunctionality
from tests.ros.msgs.test_conversion import TestBasicConversion, TestMappingConversion
from tests.ros.msgs.test_mapping import TestTypeMapping, TestFieldMapping
# Import test modules
from test.ros.msgs.test_basic import TestBasicFunctionality
from test.ros.msgs.test_conversion import TestBasicConversion, TestMappingConversion
from test.ros.msgs.test_mapping import TestTypeMapping, TestFieldMapping
def run_tests():

View File

@@ -1,213 +0,0 @@
{
"workflow": [
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines",
"targets": "Liquid_1",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines",
"targets": "Liquid_2",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines",
"targets": "Liquid_3",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_2",
"targets": "Liquid_4",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_2",
"targets": "Liquid_5",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_2",
"targets": "Liquid_6",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_3",
"targets": "dest_set",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_3",
"targets": "dest_set_2",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
},
{
"action": "transfer_liquid",
"action_args": {
"sources": "cell_lines_3",
"targets": "dest_set_3",
"asp_vol": 100.0,
"dis_vol": 74.75,
"asp_flow_rate": 94.0,
"dis_flow_rate": 95.5
}
}
],
"reagent": {
"Liquid_1": {
"slot": 1,
"well": [
"A4",
"A7",
"A10"
],
"labware": "rep 1"
},
"Liquid_4": {
"slot": 1,
"well": [
"A4",
"A7",
"A10"
],
"labware": "rep 1"
},
"dest_set": {
"slot": 1,
"well": [
"A4",
"A7",
"A10"
],
"labware": "rep 1"
},
"Liquid_2": {
"slot": 2,
"well": [
"A3",
"A5",
"A8"
],
"labware": "rep 2"
},
"Liquid_5": {
"slot": 2,
"well": [
"A3",
"A5",
"A8"
],
"labware": "rep 2"
},
"dest_set_2": {
"slot": 2,
"well": [
"A3",
"A5",
"A8"
],
"labware": "rep 2"
},
"Liquid_3": {
"slot": 3,
"well": [
"A4",
"A6",
"A10"
],
"labware": "rep 3"
},
"Liquid_6": {
"slot": 3,
"well": [
"A4",
"A6",
"A10"
],
"labware": "rep 3"
},
"dest_set_3": {
"slot": 3,
"well": [
"A4",
"A6",
"A10"
],
"labware": "rep 3"
},
"cell_lines": {
"slot": 4,
"well": [
"A1",
"A3",
"A5"
],
"labware": "DRUG + YOYO-MEDIA"
},
"cell_lines_2": {
"slot": 4,
"well": [
"A1",
"A3",
"A5"
],
"labware": "DRUG + YOYO-MEDIA"
},
"cell_lines_3": {
"slot": 4,
"well": [
"A1",
"A3",
"A5"
],
"labware": "DRUG + YOYO-MEDIA"
}
}
}

View File

@@ -1 +1 @@
__version__ = "0.10.19"
__version__ = "0.10.13"

View File

@@ -1,6 +0,0 @@
"""Entry point for `python -m unilabos`."""
from unilabos.app.main import main
if __name__ == "__main__":
main()

View File

@@ -1,6 +1,6 @@
import threading
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.utils import logger

View File

@@ -1,14 +1,13 @@
import argparse
import asyncio
import os
import platform
import shutil
import signal
import subprocess
import sys
import threading
import time
from typing import Dict, Any, List
import networkx as nx
import yaml
@@ -18,93 +17,9 @@ unilabos_dir = os.path.dirname(os.path.dirname(current_dir))
if unilabos_dir not in sys.path:
sys.path.append(unilabos_dir)
from unilabos.app.utils import cleanup_for_restart
from unilabos.utils.banner_print import print_status, print_unilab_banner
from unilabos.config.config import load_config, BasicConfig, HTTPConfig
# Global restart flags (used by ws_client and web/server)
_restart_requested: bool = False
_restart_reason: str = ""
RESTART_EXIT_CODE = 42
def _build_child_argv():
"""Build sys.argv for child process, stripping supervisor-only arguments."""
result = []
skip_next = False
for arg in sys.argv:
if skip_next:
skip_next = False
continue
if arg in ("--restart_mode", "--restart-mode"):
continue
if arg in ("--auto_restart_count", "--auto-restart-count"):
skip_next = True
continue
if arg.startswith("--auto_restart_count=") or arg.startswith("--auto-restart-count="):
continue
result.append(arg)
return result
def _run_as_supervisor(max_restarts: int):
"""
Supervisor process that spawns and monitors child processes.
Similar to Uvicorn's --reload: the supervisor itself does no heavy work,
it only launches the real process as a child and restarts it when the child
exits with RESTART_EXIT_CODE.
"""
child_argv = [sys.executable] + _build_child_argv()
restart_count = 0
print_status(
f"[Supervisor] Restart mode enabled (max restarts: {max_restarts}), "
f"child command: {' '.join(child_argv)}",
"info",
)
while True:
print_status(
f"[Supervisor] Launching process (restart {restart_count}/{max_restarts})...",
"info",
)
try:
process = subprocess.Popen(child_argv)
exit_code = process.wait()
except KeyboardInterrupt:
print_status("[Supervisor] Interrupted, terminating child process...", "info")
process.terminate()
try:
process.wait(timeout=10)
except subprocess.TimeoutExpired:
process.kill()
process.wait()
sys.exit(1)
if exit_code == RESTART_EXIT_CODE:
restart_count += 1
if restart_count > max_restarts:
print_status(
f"[Supervisor] Maximum restart count ({max_restarts}) reached, exiting",
"warning",
)
sys.exit(1)
print_status(
f"[Supervisor] Child requested restart ({restart_count}/{max_restarts}), restarting in 2s...",
"info",
)
time.sleep(2)
else:
if exit_code != 0:
print_status(f"[Supervisor] Child exited with code {exit_code}", "warning")
else:
print_status("[Supervisor] Child exited normally", "info")
sys.exit(exit_code)
def load_config_from_file(config_path):
if config_path is None:
config_path = os.environ.get("UNILABOS_BASICCONFIG_CONFIG_PATH", None)
@@ -126,7 +41,7 @@ def convert_argv_dashes_to_underscores(args: argparse.ArgumentParser):
for i, arg in enumerate(sys.argv):
for option_string in option_strings:
if arg.startswith(option_string):
new_arg = arg[:2] + arg[2 : len(option_string)].replace("-", "_") + arg[len(option_string) :]
new_arg = arg[:2] + arg[2:len(option_string)].replace("-", "_") + arg[len(option_string):]
sys.argv[i] = new_arg
break
@@ -134,8 +49,6 @@ def convert_argv_dashes_to_underscores(args: argparse.ArgumentParser):
def parse_args():
"""解析命令行参数"""
parser = argparse.ArgumentParser(description="Start Uni-Lab Edge server.")
subparsers = parser.add_subparsers(title="Valid subcommands", dest="command")
parser.add_argument("-g", "--graph", help="Physical setup graph file path.")
parser.add_argument("-c", "--controllers", default=None, help="Controllers config file path.")
parser.add_argument(
@@ -145,13 +58,6 @@ def parse_args():
action="append",
help="Path to the registry directory",
)
parser.add_argument(
"--devices",
type=str,
default=None,
action="append",
help="Path to Python code directory for AST-based device/resource scanning",
)
parser.add_argument(
"--working_dir",
type=str,
@@ -241,91 +147,11 @@ def parse_args():
action="store_true",
help="Skip environment dependency check on startup",
)
parser.add_argument(
"--check_mode",
action="store_true",
default=False,
help="Run in check mode for CI: validates registry imports and ensures no file changes",
)
parser.add_argument(
"--complete_registry",
action="store_true",
default=False,
help="Complete and rewrite YAML registry files using AST analysis results",
)
parser.add_argument(
"--no_update_feedback",
action="store_true",
help="Disable sending update feedback to server",
)
parser.add_argument(
"--test_mode",
action="store_true",
default=False,
help="Test mode: all actions simulate execution and return mock results without running real hardware",
)
parser.add_argument(
"--external_devices_only",
action="store_true",
default=False,
help="Only load external device packages (--devices), skip built-in unilabos/devices/ scanning and YAML device registry",
)
parser.add_argument(
"--extra_resource",
action="store_true",
default=False,
help="Load extra lab_ prefixed labware resources (529 auto-generated definitions from lab_resources.py)",
)
parser.add_argument(
"--restart_mode",
action="store_true",
default=False,
help="Enable supervisor mode: automatically restart the process when triggered via WebSocket",
)
parser.add_argument(
"--auto_restart_count",
type=int,
default=500,
help="Maximum number of automatic restarts in restart mode (default: 500)",
)
# workflow upload subcommand
workflow_parser = subparsers.add_parser(
"workflow_upload",
aliases=["wf"],
help="Upload workflow from xdl/json/python files",
)
workflow_parser.add_argument(
"-f",
"--workflow_file",
type=str,
required=True,
help="Path to the workflow file (JSON format)",
)
workflow_parser.add_argument(
"-n",
"--workflow_name",
type=str,
default=None,
help="Workflow name, if not provided will use the name from file or filename",
)
workflow_parser.add_argument(
"--tags",
type=str,
nargs="*",
default=[],
help="Tags for the workflow (space-separated)",
)
workflow_parser.add_argument(
"--published",
action="store_true",
default=False,
help="Whether to publish the workflow (default: False)",
)
workflow_parser.add_argument(
"--description",
type=str,
default="",
help="Workflow description, used when publishing the workflow",
help="Complete registry information",
)
return parser
@@ -338,102 +164,62 @@ def main():
args = parser.parse_args()
args_dict = vars(args)
# Supervisor mode: spawn child processes and monitor for restart
if args_dict.get("restart_mode", False):
_run_as_supervisor(args_dict.get("auto_restart_count", 5))
return
# Environment check - verify and auto-install required packages (optional)
skip_env_check = args_dict.get("skip_env_check", False)
check_mode = args_dict.get("check_mode", False)
if not skip_env_check:
from unilabos.utils.environment_check import check_environment, check_device_package_requirements
if not args_dict.get("skip_env_check", False):
from unilabos.utils.environment_check import check_environment
print_status("正在进行环境依赖检查...", "info")
if not check_environment(auto_install=True):
print_status("环境检查失败,程序退出", "error")
os._exit(1)
# First device-package dependency check: before build_registry, to ensure the import map is available
devices_dirs_for_req = args_dict.get("devices", None)
if devices_dirs_for_req:
if not check_device_package_requirements(devices_dirs_for_req):
print_status("设备包依赖检查失败,程序退出", "error")
os._exit(1)
else:
print_status("跳过环境依赖检查", "warning")
# Load the config file: prefer --config, then fall back to env vars
config_path = args_dict.get("config")
# === Resolve working_dir ===
# Rule 1: working_dir given -> check for a unilabos_data subdirectory; keep as-is if it already is one
# Rule 2: only config_path given -> use its parent directory as working_dir
# Rule 4: both given -> use each independently, but still check working_dir for a unilabos_data subdirectory
raw_working_dir = args_dict.get("working_dir")
if raw_working_dir:
working_dir = os.path.abspath(raw_working_dir)
elif config_path and os.path.exists(config_path):
working_dir = os.path.dirname(os.path.abspath(config_path))
else:
if os.getcwd().endswith("unilabos_data"):
working_dir = os.path.abspath(os.getcwd())
else:
working_dir = os.path.abspath(os.path.join(os.getcwd(), "unilabos_data"))
# Auto-detect a unilabos_data subdirectory
if os.path.basename(working_dir) != "unilabos_data":
unilabos_data_sub = os.path.join(working_dir, "unilabos_data")
if os.path.isdir(unilabos_data_sub):
working_dir = unilabos_data_sub
elif not raw_working_dir and not (config_path and os.path.exists(config_path)):
# No path given explicitly; default to cwd/unilabos_data
working_dir = os.path.abspath(os.path.join(os.getcwd(), "unilabos_data"))
# === Resolve config_path ===
if config_path and not os.path.exists(config_path):
# config_path given but missing; look for it in working_dir
candidate = os.path.join(working_dir, "local_config.py")
if os.path.exists(candidate):
config_path = candidate
print_status(f"在工作目录中发现配置文件: {config_path}", "info")
else:
print_status(
f"配置文件 {config_path} 不存在,工作目录 {working_dir} 中也未找到 local_config.py"
f"请通过 --config 传入 local_config.py 文件路径",
"error",
)
os._exit(1)
elif not config_path:
# Rule 3: no config_path given; try working_dir/local_config.py
candidate = os.path.join(working_dir, "local_config.py")
if os.path.exists(candidate):
config_path = candidate
print_status(f"发现本地配置文件: {config_path}", "info")
else:
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
print_status(f"您是否为第一次使用?并将当前路径 {working_dir} 作为工作目录? (Y/n)", "info")
if check_mode or input() != "n":
os.makedirs(working_dir, exist_ok=True)
config_path = os.path.join(working_dir, "local_config.py")
shutil.copy(
os.path.join(os.path.dirname(os.path.dirname(__file__)), "config", "example_config.py"),
config_path,
if args_dict.get("working_dir"):
working_dir = args_dict.get("working_dir", "")
if config_path and not os.path.exists(config_path):
config_path = os.path.join(working_dir, "local_config.py")
if not os.path.exists(config_path):
print_status(
f"当前工作目录 {working_dir} 未找到local_config.py请通过 --config 传入 local_config.py 文件路径",
"error",
)
print_status(f"已创建 local_config.py 路径: {config_path}", "info")
else:
os._exit(1)
# Load the config file (skipped in check_mode)
elif config_path and os.path.exists(config_path):
working_dir = os.path.dirname(config_path)
elif os.path.exists(working_dir) and os.path.exists(os.path.join(working_dir, "local_config.py")):
config_path = os.path.join(working_dir, "local_config.py")
elif not config_path and (
not os.path.exists(working_dir) or not os.path.exists(os.path.join(working_dir, "local_config.py"))
):
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
print_status(f"您是否为第一次使用?并将当前路径 {working_dir} 作为工作目录? (Y/n)", "info")
if input() != "n":
os.makedirs(working_dir, exist_ok=True)
config_path = os.path.join(working_dir, "local_config.py")
shutil.copy(
os.path.join(os.path.dirname(os.path.dirname(__file__)), "config", "example_config.py"), config_path
)
print_status(f"已创建 local_config.py 路径: {config_path}", "info")
else:
os._exit(1)
# Load the config file
print_status(f"当前工作目录为 {working_dir}", "info")
if not check_mode:
load_config_from_file(config_path)
load_config_from_file(config_path)
# Reconfigure the log level from the config file
from unilabos.utils.log import configure_logger, logger
if hasattr(BasicConfig, "log_level"):
logger.info(f"Log level set to '{BasicConfig.log_level}' from config file.")
file_path = configure_logger(loglevel=BasicConfig.log_level, working_dir=working_dir)
if file_path is not None:
logger.info(f"[LOG_FILE] {file_path}")
configure_logger(loglevel=BasicConfig.log_level, working_dir=working_dir)
if args.addr != parser.get_default("addr"):
if args.addr == "test":
@@ -455,12 +241,9 @@ def main():
if args_dict.get("sk", ""):
BasicConfig.sk = args_dict.get("sk", "")
print_status("传入了sk参数优先采用传入参数", "info")
BasicConfig.working_dir = working_dir
workflow_upload = args_dict.get("command") in ("workflow_upload", "wf")
# Start with remote resources
if not workflow_upload and args_dict["use_remote_resource"]:
if args_dict["use_remote_resource"]:
print_status("使用远程资源启动", "info")
from unilabos.app.web import http_client
@@ -473,93 +256,41 @@ def main():
BasicConfig.port = args_dict["port"] if args_dict["port"] else BasicConfig.port
BasicConfig.disable_browser = args_dict["disable_browser"] or BasicConfig.disable_browser
BasicConfig.working_dir = working_dir
BasicConfig.is_host_mode = not args_dict.get("is_slave", False)
BasicConfig.slave_no_host = args_dict.get("slave_no_host", False)
BasicConfig.upload_registry = args_dict.get("upload_registry", False)
BasicConfig.no_update_feedback = args_dict.get("no_update_feedback", False)
BasicConfig.test_mode = args_dict.get("test_mode", False)
if BasicConfig.test_mode:
print_status("启用测试模式:所有动作将模拟执行,不调用真实硬件", "warning")
BasicConfig.extra_resource = args_dict.get("extra_resource", False)
if BasicConfig.extra_resource:
print_status("启用额外资源加载将加载lab_开头的labware资源定义", "info")
BasicConfig.communication_protocol = "websocket"
machine_name = platform.node()
machine_name = os.popen("hostname").read().strip()
machine_name = "".join([c if c.isalnum() or c == "_" else "_" for c in machine_name])
BasicConfig.machine_name = machine_name
BasicConfig.vis_2d_enable = args_dict["2d_vis"]
BasicConfig.check_mode = check_mode
from unilabos.registry.registry import build_registry
# Show the startup banner
print_unilab_banner(args_dict)
# Step 0: AST analysis first, then YAML registry loading
# Both check_mode and upload_registry perform real import validation
devices_dirs = args_dict.get("devices", None)
complete_registry = args_dict.get("complete_registry", False) or check_mode
external_only = args_dict.get("external_devices_only", False)
lab_registry = build_registry(
registry_paths=args_dict["registry_path"],
devices_dirs=devices_dirs,
upload_registry=BasicConfig.upload_registry,
check_mode=check_mode,
complete_registry=complete_registry,
external_only=external_only,
)
# Check mode: exit right after registry validation completes
if check_mode:
device_count = len(lab_registry.device_type_registry)
resource_count = len(lab_registry.resource_type_registry)
print_status(f"Check mode: 注册表验证完成 ({device_count} 设备, {resource_count} 资源),退出", "info")
os._exit(0)
# The imports below require the ROS 2 environment; check_mode has already exited and does not need them
from unilabos.resources.graphio import (
read_node_link_json,
read_graphml,
dict_from_graph,
modify_to_backend_format,
)
from unilabos.app.communication import get_communication_client
from unilabos.registry.registry import build_registry
from unilabos.app.backend import start_backend
from unilabos.app.web import http_client
from unilabos.app.web import start_server
from unilabos.app.register import register_devices_and_resources
from unilabos.resources.resource_tracker import ResourceTreeSet, ResourceDict
from unilabos.resources.graphio import modify_to_backend_format
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet, ResourceDict
# Step 1: upload the full registry to the server and also save it to unilabos_data
if BasicConfig.upload_registry:
if BasicConfig.ak and BasicConfig.sk:
# print_status("开始注册设备到服务端...", "info")
try:
register_devices_and_resources(lab_registry)
# print_status("设备注册完成", "info")
except Exception as e:
print_status(f"设备注册失败: {e}", "error")
else:
print_status("未提供 ak 和 sk跳过设备注册", "info")
else:
print_status("本次启动注册表不报送云端,如果您需要联网调试,请在启动命令增加--upload_registry", "warning")
# Show the startup banner
print_unilab_banner(args_dict)
# Handle the workflow_upload subcommand
if workflow_upload:
from unilabos.workflow.wf_utils import handle_workflow_upload_command
handle_workflow_upload_command(args_dict)
print_status("工作流上传完成,程序退出", "info")
os._exit(0)
# Build the registry
lab_registry = build_registry(
args_dict["registry_path"], args_dict.get("complete_registry", False), args_dict["upload_registry"]
)
if not BasicConfig.ak or not BasicConfig.sk:
if BasicConfig.test_mode:
print_status("测试模式:跳过 ak/sk 检查,使用占位凭据", "warning")
BasicConfig.ak = BasicConfig.ak or "test_ak"
BasicConfig.sk = BasicConfig.sk or "test_sk"
else:
print_status("后续运行必须拥有一个实验室,请前往 https://uni-lab.bohrium.com 注册实验室!", "warning")
os._exit(1)
print_status("后续运行必须拥有一个实验室,请前往 https://uni-lab.bohrium.com 注册实验室!", "warning")
os._exit(1)
graph: nx.Graph
resource_tree_set: ResourceTreeSet
resource_links: List[Dict[str, Any]]
@@ -626,21 +357,31 @@ def main():
continue
# If material info was fetched from the remote, sync it with local materials
if file_path is not None and request_startup_json and "nodes" in request_startup_json:
if request_startup_json and "nodes" in request_startup_json:
print_status("开始同步远端物料到本地...", "info")
remote_tree_set = ResourceTreeSet.from_raw_dict_list(request_startup_json["nodes"])
remote_tree_set = ResourceTreeSet.from_raw_list(request_startup_json["nodes"])
resource_tree_set.merge_remote_resources(remote_tree_set)
print_status("远端物料同步完成", "info")
# Second device-package dependency check: after syncing cloud materials, community packages may introduce new requirements
# TODO: call this here once the community device package feature ships
# install_requirements_txt(community_pkg_path / "requirements.txt", label="community.xxx")
# Use ResourceTreeSet instead of a list
args_dict["resources_config"] = resource_tree_set
args_dict["devices_config"] = resource_tree_set
args_dict["graph"] = graph_res.physical_setup_graph
if BasicConfig.upload_registry:
# Register devices to the server - requires ak and sk
if BasicConfig.ak and BasicConfig.sk:
print_status("开始注册设备到服务端...", "info")
try:
register_devices_and_resources(lab_registry)
print_status("设备注册完成", "info")
except Exception as e:
print_status(f"设备注册失败: {e}", "error")
else:
print_status("未提供 ak 和 sk跳过设备注册", "info")
else:
print_status("本次启动注册表不报送云端,如果您需要联网调试,请在启动命令增加--upload_registry", "warning")
if args_dict["controllers"] is not None:
args_dict["controllers_config"] = yaml.safe_load(open(args_dict["controllers"], encoding="utf-8"))
else:
@@ -655,7 +396,6 @@ def main():
comm_client = get_communication_client()
if "websocket" in args_dict["app_bridges"]:
args_dict["bridges"].append(comm_client)
def _exit(signum, frame):
comm_client.stop()
sys.exit(0)
@@ -697,13 +437,16 @@ def main():
resource_visualization.start()
except OSError as e:
if "AMENT_PREFIX_PATH" in str(e):
print_status(f"ROS 2环境未正确设置跳过3D可视化启动。错误详情: {e}", "warning")
print_status(
f"ROS 2环境未正确设置跳过3D可视化启动。错误详情: {e}",
"warning"
)
print_status(
"建议解决方案:\n"
"1. 激活Conda环境: conda activate unilab\n"
"2. 或使用 --backend simple 参数\n"
"3. 或使用 --visual disable 参数禁用可视化",
"info",
"info"
)
else:
raise
@@ -711,26 +454,16 @@ def main():
time.sleep(1)
else:
start_backend(**args_dict)
restart_requested = start_server(
start_server(
open_browser=not args_dict["disable_browser"],
port=BasicConfig.port,
)
if restart_requested:
print_status("[Main] Restart requested, cleaning up...", "info")
cleanup_for_restart()
return
else:
start_backend(**args_dict)
# 启动服务器默认支持WebSocket触发重启
restart_requested = start_server(
start_server(
open_browser=not args_dict["disable_browser"],
port=BasicConfig.port,
)
if restart_requested:
print_status("[Main] Restart requested, cleaning up...", "info")
cleanup_for_restart()
os._exit(RESTART_EXIT_CODE)
if __name__ == "__main__":

View File

@@ -54,7 +54,6 @@ class JobAddReq(BaseModel):
action_type: str = Field(
examples=["unilabos_msgs.action._str_single_input.StrSingleInput"], description="action type", default=""
)
sample_material: dict = Field(examples=[{"string": "string"}], description="sample uuid to material uuid")
action_args: dict = Field(examples=[{"string": "string"}], description="action arguments", default_factory=dict)
task_id: str = Field(examples=["task_id"], description="task uuid (auto-generated if empty)", default="")
job_id: str = Field(examples=["job_id"], description="goal uuid (auto-generated if empty)", default="")

View File

@@ -1,8 +1,9 @@
import json
import time
from typing import Any, Dict, Optional, Tuple
from typing import Optional, Tuple, Dict, Any
from unilabos.utils.log import logger
from unilabos.utils.tools import normalize_json as _normalize_device
from unilabos.utils.type_check import TypeEncoder
def register_devices_and_resources(lab_registry, gather_only=False) -> Optional[Tuple[Dict[str, Any], Dict[str, Any]]]:
@@ -10,63 +11,50 @@ def register_devices_and_resources(lab_registry, gather_only=False) -> Optional[
Register devices and resources to the server (HTTP only)
"""
# Register resource info over HTTP
from unilabos.app.web.client import http_client
logger.info("[UniLab Register] 开始注册设备和资源...")
# Register device info
devices_to_register = {}
for device_info in lab_registry.obtain_registry_device_info():
devices_to_register[device_info["id"]] = _normalize_device(device_info)
logger.trace(f"[UniLab Register] 收集设备: {device_info['id']}")
devices_to_register[device_info["id"]] = json.loads(
json.dumps(device_info, ensure_ascii=False, cls=TypeEncoder)
)
logger.debug(f"[UniLab Register] 收集设备: {device_info['id']}")
resources_to_register = {}
for resource_info in lab_registry.obtain_registry_resource_info():
resources_to_register[resource_info["id"]] = resource_info
logger.trace(f"[UniLab Register] 收集资源: {resource_info['id']}")
logger.debug(f"[UniLab Register] 收集资源: {resource_info['id']}")
if gather_only:
return devices_to_register, resources_to_register
# Register devices
if devices_to_register:
try:
start_time = time.time()
response = http_client.resource_registry(
{"resources": list(devices_to_register.values())},
tag="device_registry",
)
response = http_client.resource_registry({"resources": list(devices_to_register.values())})
cost_time = time.time() - start_time
res_data = response.json() if response.status_code == 200 else {}
skipped = res_data.get("data", {}).get("skipped", False)
if skipped:
logger.info(
f"[UniLab Register] 设备注册跳过(内容未变化)"
f" {len(devices_to_register)}{cost_time:.3f}s"
)
elif response.status_code in [200, 201]:
logger.info(f"[UniLab Register] 成功注册 {len(devices_to_register)} 个设备 {cost_time:.3f}s")
if response.status_code in [200, 201]:
logger.info(f"[UniLab Register] 成功注册 {len(devices_to_register)} 个设备 {cost_time}ms")
else:
logger.error(f"[UniLab Register] 设备注册失败: {response.status_code}, {response.text} {cost_time:.3f}s")
logger.error(f"[UniLab Register] 设备注册失败: {response.status_code}, {response.text} {cost_time}ms")
except Exception as e:
logger.error(f"[UniLab Register] 设备注册异常: {e}")
# Register resources
if resources_to_register:
try:
start_time = time.time()
response = http_client.resource_registry(
{"resources": list(resources_to_register.values())},
tag="resource_registry",
)
response = http_client.resource_registry({"resources": list(resources_to_register.values())})
cost_time = time.time() - start_time
res_data = response.json() if response.status_code == 200 else {}
skipped = res_data.get("data", {}).get("skipped", False)
if skipped:
logger.info(
f"[UniLab Register] 资源注册跳过(内容未变化)"
f" {len(resources_to_register)}{cost_time:.3f}s"
)
elif response.status_code in [200, 201]:
logger.info(f"[UniLab Register] 成功注册 {len(resources_to_register)} 个资源 {cost_time:.3f}s")
if response.status_code in [200, 201]:
logger.info(f"[UniLab Register] 成功注册 {len(resources_to_register)} 个资源 {cost_time}ms")
else:
logger.error(f"[UniLab Register] 资源注册失败: {response.status_code}, {response.text} {cost_time:.3f}s")
logger.error(f"[UniLab Register] 资源注册失败: {response.status_code}, {response.text} {cost_time}ms")
except Exception as e:
logger.error(f"[UniLab Register] 资源注册异常: {e}")
logger.info("[UniLab Register] 设备和资源注册完成.")

View File

@@ -1,176 +0,0 @@
"""
UniLabOS application utility functions
Provides cleanup, restart, and other helper functions
"""
import glob
import os
import shutil
import sys
def patch_rclpy_dll_windows():
"""在 Windows + conda 环境下为 rclpy 打 DLL 加载补丁"""
if sys.platform != "win32" or not os.environ.get("CONDA_PREFIX"):
return
try:
import rclpy
return
except ImportError as e:
if not str(e).startswith("DLL load failed"):
return
cp = os.environ["CONDA_PREFIX"]
impl = os.path.join(cp, "Lib", "site-packages", "rclpy", "impl", "implementation_singleton.py")
pyd = glob.glob(os.path.join(cp, "Lib", "site-packages", "rclpy", "_rclpy_pybind11*.pyd"))
if not os.path.exists(impl) or not pyd:
return
with open(impl, "r", encoding="utf-8") as f:
content = f.read()
lib_bin = os.path.join(cp, "Library", "bin").replace("\\", "/")
patch = f'# UniLabOS DLL Patch\nimport os,ctypes\nos.add_dll_directory("{lib_bin}") if hasattr(os,"add_dll_directory") else None\ntry: ctypes.CDLL("{pyd[0].replace(chr(92),"/")}")\nexcept: pass\n# End Patch\n'
shutil.copy2(impl, impl + ".bak")
with open(impl, "w", encoding="utf-8") as f:
f.write(patch + content)
patch_rclpy_dll_windows()
import gc
import threading
import time
from unilabos.utils.banner_print import print_status
def cleanup_for_restart() -> bool:
"""
Clean up all resources for restart without exiting the process.
This function prepares the system for re-initialization by:
1. Stopping all communication clients
2. Destroying ROS nodes
3. Resetting singletons
4. Waiting for threads to finish
Returns:
bool: True if cleanup was successful, False otherwise
"""
print_status("[Restart] Starting cleanup for restart...", "info")
# Step 1: Stop WebSocket communication client
print_status("[Restart] Step 1: Stopping WebSocket client...", "info")
try:
from unilabos.app.communication import get_communication_client
comm_client = get_communication_client()
if comm_client is not None:
comm_client.stop()
print_status("[Restart] WebSocket client stopped", "info")
except Exception as e:
print_status(f"[Restart] Error stopping WebSocket: {e}", "warning")
# Step 2: Get HostNode and cleanup ROS
print_status("[Restart] Step 2: Cleaning up ROS nodes...", "info")
try:
from unilabos.ros.nodes.presets.host_node import HostNode
import rclpy
from rclpy.timer import Timer
host_instance = HostNode.get_instance(timeout=5)
if host_instance is not None:
print_status(f"[Restart] Found HostNode: {host_instance.device_id}", "info")
# Gracefully shutdown background threads
print_status("[Restart] Shutting down background threads...", "info")
HostNode.shutdown_background_threads(timeout=5.0)
print_status("[Restart] Background threads shutdown complete", "info")
# Stop discovery timer
if hasattr(host_instance, "_discovery_timer") and isinstance(host_instance._discovery_timer, Timer):
host_instance._discovery_timer.cancel()
print_status("[Restart] Discovery timer cancelled", "info")
# Destroy device nodes
device_count = len(host_instance.devices_instances)
print_status(f"[Restart] Destroying {device_count} device instances...", "info")
for device_id, device_node in list(host_instance.devices_instances.items()):
try:
if hasattr(device_node, "ros_node_instance") and device_node.ros_node_instance is not None:
device_node.ros_node_instance.destroy_node()
print_status(f"[Restart] Device {device_id} destroyed", "info")
except Exception as e:
print_status(f"[Restart] Error destroying device {device_id}: {e}", "warning")
# Clear devices instances
host_instance.devices_instances.clear()
host_instance.devices_names.clear()
# Destroy host node
try:
host_instance.destroy_node()
print_status("[Restart] HostNode destroyed", "info")
except Exception as e:
print_status(f"[Restart] Error destroying HostNode: {e}", "warning")
# Reset HostNode state
HostNode.reset_state()
print_status("[Restart] HostNode state reset", "info")
# Shutdown executor first (to stop executor.spin() gracefully)
if hasattr(rclpy, "__executor") and rclpy.__executor is not None:
try:
rclpy.__executor.shutdown()
rclpy.__executor = None # Clear for restart
print_status("[Restart] ROS executor shutdown complete", "info")
except Exception as e:
print_status(f"[Restart] Error shutting down executor: {e}", "warning")
# Shutdown rclpy
if rclpy.ok():
rclpy.shutdown()
print_status("[Restart] rclpy shutdown complete", "info")
except ImportError as e:
print_status(f"[Restart] ROS modules not available: {e}", "warning")
except Exception as e:
print_status(f"[Restart] Error in ROS cleanup: {e}", "warning")
return False
# Step 3: Reset communication client singleton
print_status("[Restart] Step 3: Resetting singletons...", "info")
try:
from unilabos.app import communication
if hasattr(communication, "_communication_client"):
communication._communication_client = None
print_status("[Restart] Communication client singleton reset", "info")
except Exception as e:
print_status(f"[Restart] Error resetting communication singleton: {e}", "warning")
# Step 4: Wait for threads to finish
print_status("[Restart] Step 4: Waiting for threads to finish...", "info")
time.sleep(3) # Give threads time to finish
# Check remaining threads
remaining_threads = []
for t in threading.enumerate():
if t.name != "MainThread" and t.is_alive():
remaining_threads.append(t.name)
if remaining_threads:
print_status(
f"[Restart] Warning: {len(remaining_threads)} threads still running: {remaining_threads}", "warning"
)
else:
print_status("[Restart] All threads stopped", "info")
# Step 5: Force garbage collection
print_status("[Restart] Step 5: Running garbage collection...", "info")
gc.collect()
gc.collect() # Run twice for weak references
print_status("[Restart] Garbage collection complete", "info")
print_status("[Restart] Cleanup complete. Ready for re-initialization.", "info")
return True
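The docstring promises that `cleanup_for_restart()` returns `True` on success so a caller can re-initialize in-process. A hedged sketch of such a driver loop — `initialize_system` and `should_restart` are hypothetical stand-ins, not UniLabOS APIs, and the dependencies are injected so the sketch is self-contained:

```python
def run_with_restart(initialize_system, should_restart, cleanup, max_restarts=3):
    """Run initialize_system, cleaning up and re-running while a restart is requested."""
    restarts = 0
    while True:
        initialize_system()
        if not should_restart() or restarts >= max_restarts:
            return restarts
        if not cleanup():       # cleanup_for_restart() returns False on failure
            return restarts     # refuse to restart on top of a dirty state
        restarts += 1

flags = {"restarts_wanted": 2}
started = []

def init():
    started.append(True)

def want_restart():
    if flags["restarts_wanted"] > 0:
        flags["restarts_wanted"] -= 1
        return True
    return False

count = run_with_restart(init, want_restart, lambda: True)
print(count, len(started))
```

Bailing out when `cleanup()` fails mirrors the function above returning `False` after a ROS cleanup error: restarting with live nodes or threads left behind would be worse than exiting.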


@@ -1052,7 +1052,7 @@ async def handle_file_import(websocket: WebSocket, request_data: dict):
"result": {},
"schema": lab_registry._generate_unilab_json_command_schema(v["args"], k),
"goal_default": {i["name"]: i["default"] for i in v["args"]},
"handles": {},
"handles": [],
}
# Do not generate actions that are already configured
for k, v in enhanced_info["action_methods"].items()
@@ -1340,5 +1340,5 @@ def setup_api_routes(app):
# Start broadcast tasks
@app.on_event("startup")
async def startup_event():
asyncio.create_task(broadcast_device_status(), name="web-api-startup-device")
asyncio.create_task(broadcast_status_page_data(), name="web-api-startup-status")
asyncio.create_task(broadcast_device_status())
asyncio.create_task(broadcast_status_page_data())
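The removed `name=` arguments are purely diagnostic: a task's name shows up in `asyncio.all_tasks()` listings and in "Task was destroyed but it is pending!" warnings, which makes leaked background tasks much easier to attribute. A minimal self-contained illustration:

```python
import asyncio

async def broadcast(results):
    # Stand-in for broadcast_device_status(); records one tick and returns.
    results.append("tick")

async def main():
    results = []
    # Naming the task does not change scheduling, only how it is reported.
    task = asyncio.create_task(broadcast(results), name="web-api-startup-device")
    await task
    return task.get_name(), results

name, results = asyncio.run(main())
print(name, results)
```

`Task.get_name()` and the `name=` parameter are available from Python 3.8.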


@@ -3,15 +3,15 @@ HTTP client module
Provides client functionality for communicating with the remote server; only the host needs it
"""
import gzip
import json
import os
import time
from threading import Thread
from typing import List, Dict, Any, Optional
from unilabos.utils.tools import fast_dumps as _fast_dumps, fast_dumps_pretty as _fast_dumps_pretty
import requests
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.utils.log import info
from unilabos.config.config import HTTPConfig, BasicConfig
from unilabos.utils import logger
@@ -76,8 +76,7 @@ class HTTPClient:
Dict[str, str]: Mapping from old UUIDs to new UUIDs {old_uuid: new_uuid}
"""
with open(os.path.join(BasicConfig.working_dir, "req_resource_tree_add.json"), "w", encoding="utf-8") as f:
payload = {"nodes": [x for xs in resources.dump() for x in xs], "mount_uuid": mount_uuid}
f.write(json.dumps(payload, indent=4))
f.write(json.dumps({"nodes": [x for xs in resources.dump() for x in xs], "mount_uuid": mount_uuid}, indent=4))
# Extract all node UUIDs from the serialized data to preserve the old UUIDs
old_uuids = {n.res_content.uuid: n for n in resources.all_nodes}
if not self.initialized or first_add:
@@ -282,54 +281,22 @@ class HTTPClient:
)
return response
def resource_registry(
self, registry_data: Dict[str, Any] | List[Dict[str, Any]], tag: str = "registry",
) -> requests.Response:
def resource_registry(self, registry_data: Dict[str, Any] | List[Dict[str, Any]]) -> requests.Response:
"""
Register resources with the server, and synchronously save the request/response to unilabos_data
Register resources with the server
Args:
registry_data: registry data, formatted as {resource_id: resource_info} / [{resource_info}]
tag: suffix tag for the saved files (e.g. "device_registry" / "resource_registry")
Returns:
Response: the API response object
"""
# Serialize once; used for both saving and sending
json_bytes = _fast_dumps(registry_data)
# Save the request data to unilabos_data
req_path = os.path.join(BasicConfig.working_dir, f"req_{tag}_upload.json")
try:
os.makedirs(BasicConfig.working_dir, exist_ok=True)
with open(req_path, "wb") as f:
f.write(_fast_dumps_pretty(registry_data))
logger.trace(f"Registry request data saved: {req_path}")
except Exception as e:
logger.warning(f"Failed to save registry request data: {e}")
compressed_body = gzip.compress(json_bytes)
headers = {
"Authorization": f"Lab {self.auth}",
"Content-Type": "application/json",
"Content-Encoding": "gzip",
}
response = requests.post(
f"{self.remote_addr}/lab/resource",
data=compressed_body,
headers=headers,
json=registry_data,
headers={"Authorization": f"Lab {self.auth}"},
timeout=30,
)
# Save the response data to unilabos_data
res_path = os.path.join(BasicConfig.working_dir, f"res_{tag}_upload.json")
try:
with open(res_path, "w", encoding="utf-8") as f:
f.write(f"{response.status_code}\n{response.text}")
logger.trace(f"Registry response data saved: {res_path}")
except Exception as e:
logger.warning(f"Failed to save registry response data: {e}")
if response.status_code not in [200, 201]:
logger.error(f"Resource registration failed: {response.status_code}, {response.text}")
if response.status_code == 200:
@@ -368,106 +335,6 @@ class HTTPClient:
logger.error(f"响应内容: {response.text}")
return None
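The removed `resource_registry` variant serializes the payload once, gzip-compresses it, and sends it with `Content-Encoding: gzip` as a raw `data=` body instead of passing `json=` to `requests.post`. A self-contained sketch of that upload path; `send` is injected here so the example stays offline (in the real code it would be `requests.post`), and the URL is a placeholder:

```python
import gzip
import json

def post_registry_gzip(url, auth, registry_data, send):
    """Serialize once, compress, and send with gzip content encoding."""
    body = gzip.compress(json.dumps(registry_data).encode("utf-8"))
    headers = {
        "Authorization": f"Lab {auth}",
        "Content-Type": "application/json",   # the body is still JSON...
        "Content-Encoding": "gzip",           # ...but compressed on the wire
    }
    return send(url, data=body, headers=headers, timeout=30)

captured = {}

def fake_send(url, data, headers, timeout):
    captured.update(url=url, data=data, headers=headers)
    return "ok"

post_registry_gzip("https://example.invalid/lab/resource", "secret",
                   {"resources": [{"id": "r1"}]}, fake_send)
# Verify the body round-trips through gzip back to the original payload.
roundtrip = json.loads(gzip.decompress(captured["data"]))
print(roundtrip)
```

Large registry payloads compress well, so this trades a little CPU for substantially less upload bandwidth; the server must inflate bodies marked `Content-Encoding: gzip`.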
def workflow_import(
self,
name: str,
workflow_uuid: str,
workflow_name: str,
nodes: List[Dict[str, Any]],
edges: List[Dict[str, Any]],
tags: Optional[List[str]] = None,
published: bool = False,
description: str = "",
) -> Dict[str, Any]:
"""
Import a workflow to the server; if published is True, additionally issue a publish request
Args:
name: workflow name (top level)
workflow_uuid: workflow UUID
workflow_name: workflow name (inside data)
nodes: list of workflow nodes
edges: list of workflow edges
tags: list of workflow tags, defaults to an empty list
published: whether to publish the workflow, defaults to False
description: workflow description, used when publishing
Returns:
Dict: API response data, containing code and data (uuid, name)
"""
payload = {
"name": name,
"data": {
"workflow_uuid": workflow_uuid,
"workflow_name": workflow_name,
"nodes": nodes,
"edges": edges,
"tags": tags if tags is not None else [],
},
}
# Save the request to a file
with open(os.path.join(BasicConfig.working_dir, "req_workflow_upload.json"), "w", encoding="utf-8") as f:
f.write(json.dumps(payload, indent=4, ensure_ascii=False))
response = requests.post(
f"{self.remote_addr}/lab/workflow/owner/import",
json=payload,
headers={"Authorization": f"Lab {self.auth}"},
timeout=60,
)
# Save the response to a file
with open(os.path.join(BasicConfig.working_dir, "res_workflow_upload.json"), "w", encoding="utf-8") as f:
f.write(f"{response.status_code}" + "\n" + response.text)
if response.status_code == 200:
res = response.json()
if "code" in res and res["code"] != 0:
logger.error(f"Failed to import workflow: {response.text}")
return res
# After a successful import, issue a publish request if needed
if published:
imported_uuid = res.get("data", {}).get("uuid", workflow_uuid)
publish_res = self.workflow_publish(imported_uuid, description)
res["publish_result"] = publish_res
return res
else:
logger.error(f"Failed to import workflow: {response.status_code}, {response.text}")
return {"code": response.status_code, "message": response.text}
def workflow_publish(self, workflow_uuid: str, description: str = "") -> Dict[str, Any]:
"""
Publish a workflow
Args:
workflow_uuid: workflow UUID
description: workflow description
Returns:
Dict: API response data
"""
payload = {
"uuid": workflow_uuid,
"description": description,
"published": True,
}
logger.info(f"Publishing workflow: {workflow_uuid}")
response = requests.patch(
f"{self.remote_addr}/lab/workflow/owner",
json=payload,
headers={"Authorization": f"Lab {self.auth}"},
timeout=60,
)
if response.status_code == 200:
res = response.json()
if "code" in res and res["code"] != 0:
logger.error(f"Failed to publish workflow: {response.text}")
else:
logger.info(f"Workflow published successfully: {workflow_uuid}")
return res
else:
logger.error(f"Failed to publish workflow: {response.status_code}, {response.text}")
return {"code": response.status_code, "message": response.text}
# Create the default client instance
http_client = HTTPClient()
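`workflow_import` chains a publish request after a successful import: publish only runs when the server accepted the import (`code == 0`) and `published=True`, and it uses the server-assigned UUID when one is returned. That decision logic can be sketched independently of HTTP; `do_import` and `do_publish` are injected stand-ins for the two server calls, not real UniLabOS functions:

```python
def import_workflow(do_import, do_publish, workflow_uuid, published=False):
    """Import, then conditionally publish using the server-assigned UUID."""
    res = do_import()
    if res.get("code", 0) != 0:
        return res                      # import rejected: never publish
    if published:
        # Prefer the UUID the server returned; fall back to the local one.
        imported_uuid = res.get("data", {}).get("uuid", workflow_uuid)
        res["publish_result"] = do_publish(imported_uuid)
    return res

calls = []
res = import_workflow(
    do_import=lambda: {"code": 0, "data": {"uuid": "wf-42"}},
    do_publish=lambda u: calls.append(u) or {"code": 0},
    workflow_uuid="wf-local",
    published=True,
)
print(res["publish_result"], calls)
```

Note the sketch publishes `wf-42` (the server's UUID), not `wf-local`, matching the `res.get("data", {}).get("uuid", workflow_uuid)` fallback in the diff above.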


@@ -58,14 +58,14 @@ class JobResultStore:
feedback=feedback or {},
timestamp=time.time(),
)
logger.trace(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
"""Get and remove a job result"""
with self._results_lock:
result = self._results.pop(job_id, None)
if result:
logger.trace(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
return result
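`get_and_remove` implements a pop-on-read store: each result can be consumed exactly once, so finished-job results cannot accumulate indefinitely in memory. A minimal thread-safe sketch of the same pattern (not the actual JobResultStore):

```python
import threading

class ResultStore:
    """Pop-on-read result store: a result is removed the moment it is read."""

    def __init__(self):
        self._results = {}
        self._lock = threading.Lock()

    def put(self, job_id, result):
        with self._lock:
            self._results[job_id] = result

    def get_and_remove(self, job_id):
        # dict.pop with a default is atomic under the lock: the first
        # reader wins, later readers get None.
        with self._lock:
            return self._results.pop(job_id, None)

store = ResultStore()
store.put("job-1", {"status": "SUCCESS"})
first = store.get_and_remove("job-1")
second = store.get_and_remove("job-1")   # already consumed
print(first, second)
```

The trade-off is that a result lost by its consumer is gone, which is why the real class also offers a non-destructive `get_result`.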
def get_result(self, job_id: str) -> Optional[JobResult]:
@@ -327,7 +327,6 @@ def job_add(req: JobAddReq) -> JobData:
queue_item,
action_type=action_type,
action_kwargs=action_args,
sample_material=req.sample_material,
server_info=server_info,
)


@@ -6,6 +6,7 @@ Web server module
import webbrowser
import uvicorn
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from starlette.responses import Response
@@ -86,7 +87,7 @@ def setup_server() -> FastAPI:
# Set up page routes
try:
setup_web_pages(pages)
# info("[Web] Web UI module loaded")
info("[Web] Web UI module loaded")
except ImportError as e:
info(f"[Web] Web page module not found: {str(e)}")
except Exception as e:
@@ -95,7 +96,7 @@ def setup_server() -> FastAPI:
return app
def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = True) -> bool:
def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = True) -> None:
"""
Start the server
@@ -103,14 +104,7 @@ def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = T
host: server host
port: server port
open_browser: whether to open a browser automatically
Returns:
bool: True if restart was requested, False otherwise
"""
import threading
import time
from uvicorn import Config, Server
# Set up the server
setup_server()
@@ -129,37 +123,7 @@ def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = T
# Start the server
info(f"[Web] Starting FastAPI server: {host}:{port}")
# Use a restart-capable mode
config = Config(app=app, host=host, port=port, log_config=log_config)
server = Server(config)
# Start the server thread
server_thread = threading.Thread(target=server.run, daemon=True, name="uvicorn_server")
server_thread.start()
# info("[Web] Server started, monitoring for restart requests...")
# Monitor the restart flag
import unilabos.app.main as main_module
while server_thread.is_alive():
if hasattr(main_module, "_restart_requested") and main_module._restart_requested:
info(
f"[Web] Restart requested via WebSocket, reason: {getattr(main_module, '_restart_reason', 'unknown')}"
)
main_module._restart_requested = False
# Stop the server
server.should_exit = True
server_thread.join(timeout=5)
info("[Web] Server stopped, ready for restart")
return True
time.sleep(1)
return False
uvicorn.run(app, host=host, port=port, log_config=log_config)
# Start the server when the script is run directly


@@ -23,10 +23,9 @@ from typing import Optional, Dict, Any, List
from urllib.parse import urlparse
from enum import Enum
from typing_extensions import TypedDict
from jedi.inference.gradual.typing import TypedDict
from unilabos.app.model import JobAddReq
from unilabos.resources.resource_tracker import ResourceDictType
from unilabos.ros.nodes.presets.host_node import HostNode
from unilabos.utils.type_check import serialize_result_info
from unilabos.app.communication import BaseCommunicationClient
@@ -77,7 +76,6 @@ class JobInfo:
start_time: float
last_update_time: float = field(default_factory=time.time)
ready_timeout: Optional[float] = None # Timeout for the READY state
always_free: bool = False # Whether this action is always free (not subject to queueing)
def update_timestamp(self):
"""Refresh the last-update timestamp"""
@@ -129,15 +127,6 @@ class DeviceActionManager:
# Always add the job to all_jobs
self.all_jobs[job_info.job_id] = job_info
# always_free actions bypass queueing and go straight to READY
if job_info.always_free:
job_info.status = JobStatus.READY
job_info.update_timestamp()
job_info.set_ready_timeout(10)
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Job {job_log} always_free, start immediately")
return True
# Check whether a job is already running or about to run
if device_key in self.active_jobs:
# A job is running or about to run; enqueue this one
@@ -165,7 +154,7 @@ class DeviceActionManager:
job_info.set_ready_timeout(10) # Set a 10-second timeout
self.active_jobs[device_key] = job_info
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
logger.info(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
return True
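The admission logic above — queue if the device already has an active job, otherwise become the READY job immediately, with `always_free` jobs bypassing the device lock entirely — can be reduced to a few lines. This is a simplified sketch, not the actual `DeviceActionManager` (timeouts, statuses, and logging are omitted):

```python
from collections import defaultdict, deque

class MiniActionManager:
    def __init__(self):
        self.active = {}                  # device_key -> active job_id
        self.queues = defaultdict(deque)  # device_key -> pending job_ids (FIFO)

    def add(self, device_key, job_id, always_free=False):
        """Return True if the job may start now, False if it queued."""
        if always_free:
            return True                   # never blocks, never occupies the device
        if device_key in self.active:
            self.queues[device_key].append(job_id)
            return False
        self.active[device_key] = job_id  # device idle: start immediately
        return True

m = MiniActionManager()
print(m.add("/devices/pump1/move", "a"))                      # device idle -> starts
print(m.add("/devices/pump1/move", "b"))                      # device busy -> queues
print(m.add("/devices/pump1/status", "c", always_free=True))  # bypasses the lock
```

Keying on `device_action_key` (device plus action) rather than the device alone means different actions on the same device queue independently, which matches the `/devices/{device_id}/{action_name}` keys used above.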
def start_job(self, job_id: str) -> bool:
@@ -187,15 +176,11 @@ class DeviceActionManager:
logger.error(f"[DeviceActionManager] Job {job_log} is not in READY status, current: {job_info.status}")
return False
# always_free jobs skip the active_jobs check
if not job_info.always_free:
# Check that this job is the device's active job
if device_key not in self.active_jobs or self.active_jobs[device_key].job_id != job_id:
job_log = format_job_log(
job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name
)
logger.error(f"[DeviceActionManager] Job {job_log} is not the active job for {device_key}")
return False
# Check that this job is the device's active job
if device_key not in self.active_jobs or self.active_jobs[device_key].job_id != job_id:
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.error(f"[DeviceActionManager] Job {job_log} is not the active job for {device_key}")
return False
# Start the job, transitioning its status from READY to STARTED
job_info.status = JobStatus.STARTED
@@ -218,13 +203,6 @@ class DeviceActionManager:
job_info = self.all_jobs[job_id]
device_key = job_info.device_action_key
# always_free jobs are cleaned up directly and do not affect the queue
if job_info.always_free:
job_info.status = JobStatus.ENDED
job_info.update_timestamp()
del self.all_jobs[job_id]
return None
# Remove the active job
if device_key in self.active_jobs and self.active_jobs[device_key].job_id == job_id:
del self.active_jobs[device_key]
@@ -232,9 +210,8 @@ class DeviceActionManager:
job_info.update_timestamp()
# Remove the finished job from all_jobs
del self.all_jobs[job_id]
# job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
# logger.debug(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
pass
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
else:
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.warning(f"[DeviceActionManager] Job {job_log} was not active for {device_key}")
@@ -250,20 +227,15 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
return next_job
return None
def get_active_jobs(self) -> List[JobInfo]:
"""Get all running jobs (including active_jobs and always_free STARTED jobs)"""
"""Get all running jobs"""
with self.lock:
jobs = list(self.active_jobs.values())
# Include always_free STARTED jobs (they are not in active_jobs)
for job in self.all_jobs.values():
if job.always_free and job.status == JobStatus.STARTED and job not in jobs:
jobs.append(job)
return jobs
return list(self.active_jobs.values())
def get_queued_jobs(self) -> List[JobInfo]:
"""Get all queued jobs"""
@@ -288,14 +260,6 @@ class DeviceActionManager:
job_info = self.all_jobs[job_id]
device_key = job_info.device_action_key
# always_free jobs are cleaned up directly
if job_info.always_free:
job_info.status = JobStatus.ENDED
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Always-free job {job_log} cancelled")
return True
# If this is the running job
if device_key in self.active_jobs and self.active_jobs[device_key].job_id == job_id:
# Clear the active-job state
@@ -304,7 +268,7 @@ class DeviceActionManager:
# Remove from all_jobs
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
logger.info(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
# Start the next job
if device_key in self.device_queues and self.device_queues[device_key]:
@@ -317,7 +281,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
return True
# If this is a queued job
@@ -331,7 +295,7 @@ class DeviceActionManager:
job_log = format_job_log(
job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name
)
logger.trace(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
logger.info(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
return True
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
@@ -369,18 +333,13 @@ class DeviceActionManager:
timeout_jobs = []
with self.lock:
# Collect all READY jobs to check (active_jobs + always_free READY jobs)
ready_candidates = list(self.active_jobs.values())
for job in self.all_jobs.values():
if job.always_free and job.status == JobStatus.READY and job not in ready_candidates:
ready_candidates.append(job)
ready_jobs_count = sum(1 for job in ready_candidates if job.status == JobStatus.READY)
# Count jobs in the READY state
ready_jobs_count = sum(1 for job in self.active_jobs.values() if job.status == JobStatus.READY)
if ready_jobs_count > 0:
logger.trace(f"[DeviceActionManager] Checking {ready_jobs_count} READY jobs for timeout") # type: ignore # noqa: E501
# Find all timed-out READY jobs (detect only; do not handle)
for job_info in ready_candidates:
for job_info in self.active_jobs.values():
if job_info.is_ready_timeout():
timeout_jobs.append(job_info)
job_log = format_job_log(
@@ -400,7 +359,7 @@ class MessageProcessor:
self.device_manager = device_manager
self.queue_processor = None # Set lazily
self.websocket_client = None # Set lazily
self.session_id = str(uuid.uuid4())[:6] # Generate a random session_id
self.session_id = ""
# WebSocket connection
self.websocket = None
@@ -409,7 +368,6 @@ class MessageProcessor:
# Thread control
self.is_running = False
self.thread = None
self._loop = None # asyncio event loop reference, used to close the websocket externally
self.reconnect_count = 0
logger.info(f"[MessageProcessor] Initialized for URL: {websocket_url}")
@@ -436,31 +394,22 @@ class MessageProcessor:
def stop(self) -> None:
"""Stop the message-processing thread"""
self.is_running = False
# Proactively close the websocket to break the receive loop quickly
ws = self.websocket
loop = self._loop
if ws and loop and loop.is_running():
try:
asyncio.run_coroutine_threadsafe(ws.close(), loop)
except Exception:
pass
if self.thread and self.thread.is_alive():
self.thread.join(timeout=2)
logger.info("[MessageProcessor] Stopped")
def _run(self):
"""Run the main message-processing loop"""
self._loop = asyncio.new_event_loop()
loop = asyncio.new_event_loop()
try:
asyncio.set_event_loop(self._loop)
self._loop.run_until_complete(self._connection_handler())
asyncio.set_event_loop(loop)
loop.run_until_complete(self._connection_handler())
except Exception as e:
logger.error(f"[MessageProcessor] Thread error: {str(e)}")
logger.error(traceback.format_exc())
finally:
if self._loop:
self._loop.close()
self._loop = None
if loop:
loop.close()
async def _connection_handler(self):
"""Handle the WebSocket connection and reconnection logic"""
@@ -477,10 +426,8 @@ class MessageProcessor:
async with websockets.connect(
self.websocket_url,
ssl=ssl_context,
open_timeout=20,
ping_interval=WSConfig.ping_interval,
ping_timeout=10,
close_timeout=5,
additional_headers={
"Authorization": f"Lab {BasicConfig.auth_secret()}",
"EdgeSession": f"{self.session_id}",
@@ -491,98 +438,72 @@ class MessageProcessor:
self.connected = True
self.reconnect_count = 0
logger.info(f"[MessageProcessor] Connected to {self.websocket_url}")
logger.info(f"[MessageProcessor] Connected to {self.websocket_url}")
# Start the send coroutine
send_task = asyncio.create_task(self._send_handler(), name="websocket-send_task")
# Re-register with the server after every connection (including reconnects);
# otherwise the server does not know the client is online and will not push messages.
if self.websocket_client:
self.websocket_client.publish_host_ready()
send_task = asyncio.create_task(self._send_handler())
try:
# Receive-message loop
await self._message_handler()
finally:
# send_task must be stopped before async with's __aexit__ runs;
# otherwise send_task keeps sending data during the close handshake,
# interfering with the websockets library's internal cleanup and leaking tasks.
self.connected = False
send_task.cancel()
try:
await send_task
except asyncio.CancelledError:
pass
self.connected = False
except websockets.exceptions.ConnectionClosed:
logger.warning("[MessageProcessor] Connection to the server was interrupted")
except TimeoutError:
logger.warning(
f"[MessageProcessor] Communication with the server timed out (attempt {self.reconnect_count + 1}); please check your network"
)
except websockets.exceptions.InvalidStatus as e:
logger.warning(
f"[MessageProcessor] Received server status code {e.response.status_code}; the previous process may not have exited yet"
)
except Exception as e:
logger.error(traceback.format_exc())
logger.error(f"[MessageProcessor] Error while attempting to reconnect: {str(e)}")
finally:
logger.warning("[MessageProcessor] Connection closed")
self.connected = False
except Exception as e:
logger.error(f"[MessageProcessor] Connection error: {str(e)}")
logger.error(traceback.format_exc())
self.connected = False
finally:
self.websocket = None
# Reconnection logic
if not self.is_running:
break
if self.reconnect_count < WSConfig.max_reconnect_attempts:
if self.is_running and self.reconnect_count < WSConfig.max_reconnect_attempts:
self.reconnect_count += 1
backoff = WSConfig.reconnect_interval
logger.info(
f"[MessageProcessor] Reconnecting in {backoff} seconds (attempt {self.reconnect_count}/{WSConfig.max_reconnect_attempts})"
f"[MessageProcessor] Reconnecting in {WSConfig.reconnect_interval}s "
f"(attempt {self.reconnect_count}/{WSConfig.max_reconnect_attempts})"
)
await asyncio.sleep(backoff)
else:
await asyncio.sleep(WSConfig.reconnect_interval)
elif self.reconnect_count >= WSConfig.max_reconnect_attempts:
logger.error("[MessageProcessor] Max reconnection attempts reached")
break
else:
self.reconnect_count -= 1
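The reconnect branch above retries at a fixed interval and gives up after `max_reconnect_attempts` consecutive failures. Stripped of the WebSocket details, the policy looks like this — `connect` and `sleep` are injected so the sketch stays testable, and note the real loop above *decrements* the counter after a successful session rather than resetting it, which this simplification does not model:

```python
def reconnect_loop(connect, interval, max_attempts, sleep):
    """Return True once connect() succeeds, False after max_attempts failures."""
    failures = 0
    while failures < max_attempts:
        if connect():
            return True
        failures += 1
        sleep(interval)   # fixed backoff between attempts
    return False

attempts = iter([False, False, True])  # fail twice, then succeed
slept = []
ok = reconnect_loop(lambda: next(attempts), 5, 10, slept.append)
print(ok, slept)
```

A fixed interval is simple but can hammer a struggling server; exponential backoff with jitter is the usual refinement if reconnect storms become a concern.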
async def _message_handler(self):
"""Handle received messages
ConnectionClosed is not caught here; it propagates up to _connection_handler
so that async with websockets.connect()'s __aexit__ can detect the closed
connection and clean up its internal tasks correctly, avoiding task leaks.
"""
"""Handle received messages"""
if not self.websocket:
logger.error("[MessageProcessor] WebSocket connection is None")
return
async for message in self.websocket:
try:
data = json.loads(message)
message_type = data.get("action", "")
message_data = data.get("data")
if self.session_id and self.session_id == data.get("edge_session"):
await self._process_message(message_type, message_data)
else:
if message_type.endswith("_material"):
logger.trace(
f"[MessageProcessor] Received a stale message belonging to {data.get('edge_session')}: {data}"
)
logger.debug(
f"[MessageProcessor] Skipped a stale message belonging to {data.get('edge_session')}: {data.get('action')}"
)
else:
await self._process_message(message_type, message_data)
except json.JSONDecodeError:
logger.error(f"[MessageProcessor] Invalid JSON received: {message}")
except Exception as e:
logger.error(f"[MessageProcessor] Error processing message: {str(e)}")
logger.error(traceback.format_exc())
try:
async for message in self.websocket:
try:
data = json.loads(message)
await self._process_message(data)
except json.JSONDecodeError:
logger.error(f"[MessageProcessor] Invalid JSON received: {message}")
except Exception as e:
logger.error(f"[MessageProcessor] Error processing message: {str(e)}")
logger.error(traceback.format_exc())
except websockets.exceptions.ConnectionClosed:
logger.info("[MessageProcessor] Message handler stopped - connection closed")
except Exception as e:
logger.error(f"[MessageProcessor] Message handler error: {str(e)}")
logger.error(traceback.format_exc())
async def _send_handler(self):
"""Process messages in the send queue"""
logger.trace("[MessageProcessor] Send handler started")
logger.debug("[MessageProcessor] Send handler started")
try:
while self.connected and self.websocket:
@@ -610,7 +531,7 @@ class MessageProcessor:
try:
message_str = json.dumps(msg, ensure_ascii=False)
await self.websocket.send(message_str)
# logger.trace(f"[MessageProcessor] Message sent: {msg.get('action', 'unknown')}") # type: ignore # noqa: E501
logger.trace(f"[MessageProcessor] Message sent: {msg.get('action', 'unknown')}") # type: ignore # noqa: E501
except Exception as e:
logger.error(f"[MessageProcessor] Failed to send message: {str(e)}")
logger.error(traceback.format_exc())
@@ -627,16 +548,18 @@ class MessageProcessor:
except asyncio.CancelledError:
logger.debug("[MessageProcessor] Send handler cancelled")
raise
except Exception as e:
logger.error(f"[MessageProcessor] Fatal error in send handler: {str(e)}")
logger.error(traceback.format_exc())
finally:
logger.debug("[MessageProcessor] Send handler stopped")
async def _process_message(self, message_type: str, message_data: Dict[str, Any]):
async def _process_message(self, data: Dict[str, Any]):
"""Handle a received message"""
logger.trace(f"[MessageProcessor] Processing message: {message_type}")
message_type = data.get("action", "")
message_data = data.get("data")
logger.debug(f"[MessageProcessor] Processing message: {message_type}")
try:
if message_type == "pong":
@@ -648,23 +571,14 @@ class MessageProcessor:
elif message_type == "cancel_action" or message_type == "cancel_task":
await self._handle_cancel_action(message_data)
elif message_type == "add_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "add")
elif message_type == "update_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "update")
elif message_type == "remove_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "remove")
# elif message_type == "session_id":
# self.session_id = message_data.get("session_id")
# logger.info(f"[MessageProcessor] Session ID: {self.session_id}")
elif message_type == "add_device":
await self._handle_device_manage(message_data, "add")
elif message_type == "remove_device":
await self._handle_device_manage(message_data, "remove")
elif message_type == "request_restart":
await self._handle_request_restart(message_data)
elif message_type == "session_id":
self.session_id = message_data.get("session_id")
logger.info(f"[MessageProcessor] Session ID: {self.session_id}")
else:
logger.debug(f"[MessageProcessor] Unknown message type: {message_type}")
@@ -678,24 +592,6 @@ class MessageProcessor:
if host_node:
host_node.handle_pong_response(pong_data)
def _check_action_always_free(self, device_id: str, action_name: str) -> bool:
"""Check whether this action is marked always_free, looked up via HostNode's unified _action_value_mappings"""
try:
host_node = HostNode.get_instance(0)
if not host_node:
return False
# noinspection PyProtectedMember
action_mappings = host_node._action_value_mappings.get(device_id)
if not action_mappings:
return False
# Try a direct match or an auto- prefixed match
for key in [action_name, f"auto-{action_name}"]:
if key in action_mappings:
return action_mappings[key].get("always_free", False)
return False
except Exception:
return False
async def _handle_query_action_state(self, data: Dict[str, Any]):
"""Handle the query_action_state message"""
device_id = data.get("device_id", "")
@@ -710,9 +606,6 @@ class MessageProcessor:
device_action_key = f"/devices/{device_id}/{action_name}"
# Check whether the action is always_free
action_always_free = self._check_action_always_free(device_id, action_name)
# Create the job info
job_info = JobInfo(
job_id=job_id,
@@ -722,7 +615,6 @@ class MessageProcessor:
device_action_key=device_action_key,
status=JobStatus.QUEUE,
start_time=time.time(),
always_free=action_always_free,
)
# Add to the device manager
@@ -734,13 +626,13 @@ class MessageProcessor:
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", True, 0
)
logger.trace(f"[MessageProcessor] Job {job_log} can start immediately")
logger.info(f"[MessageProcessor] Job {job_log} can start immediately")
else:
# Needs to queue
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", False, 10
)
logger.trace(f"[MessageProcessor] Job {job_log} queued")
logger.info(f"[MessageProcessor] Job {job_log} queued")
# Notify the QueueProcessor of the new queue update
if self.queue_processor:
@@ -749,37 +641,9 @@ class MessageProcessor:
async def _handle_job_start(self, data: Dict[str, Any]):
"""Handle the job_start message"""
try:
if not data.get("sample_material"):
data["sample_material"] = {}
req = JobAddReq(**data)
job_log = format_job_log(req.job_id, req.task_id, req.device_id, req.action)
# For always_free actions, the server may skip query_action_state and send job_start directly;
# in that case the job is not yet registered and must be auto-registered here
existing_job = self.device_manager.get_job_info(req.job_id)
if not existing_job:
action_name = req.action
device_action_key = f"/devices/{req.device_id}/{action_name}"
action_always_free = self._check_action_always_free(req.device_id, action_name)
if action_always_free:
job_info = JobInfo(
job_id=req.job_id,
task_id=req.task_id,
device_id=req.device_id,
action_name=action_name,
device_action_key=device_action_key,
status=JobStatus.QUEUE,
start_time=time.time(),
always_free=True,
)
self.device_manager.add_queue_request(job_info)
logger.info(f"[MessageProcessor] Job {job_log} always_free, auto-registered from direct job_start")
else:
logger.error(f"[MessageProcessor] Job {job_log} not registered (missing query_action_state)")
return
success = self.device_manager.start_job(req.job_id)
if not success:
logger.error(f"[MessageProcessor] Failed to start job {job_log}")
@@ -808,7 +672,6 @@ class MessageProcessor:
queue_item,
action_type=req.action_type,
action_kwargs=req.action_args,
sample_material=req.sample_material,
server_info=req.server_info,
)
@@ -973,7 +836,9 @@ class MessageProcessor:
device_action_groups[key_add] = []
device_action_groups[key_add].append(item["uuid"])
logger.info(f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}")
logger.info(
f"[MessageProcessor] Resource migrated: {item['uuid'][:8]} from {device_old_id} to {device_id}"
)
else:
# Regular update
key = (device_id, "update")
@@ -987,13 +852,11 @@ class MessageProcessor:
device_action_groups[key] = []
device_action_groups[key].append(item["uuid"])
logger.trace(
f"[ResourceSync] Action {action} group count: {len(device_action_groups)}, total: {len(resource_uuid_list)}"
)
logger.info(f"Material update triggered: {action} group count: {len(device_action_groups)}, total: {len(resource_uuid_list)}")
# Create an independent update thread per (device_id, action)
for (device_id, actual_action), items in device_action_groups.items():
logger.trace(f"[资源同步] {device_id} 物料动作 {actual_action} 数量: {len(items)}")
logger.info(f"设备 {device_id} 物料更新 {actual_action} 数量: {len(items)}")
def _notify_resource_tree(dev_id, act, item_list):
try:
@@ -1025,81 +888,6 @@ class MessageProcessor:
)
thread.start()
async def _handle_device_manage(self, device_list: list[ResourceDictType], action: str):
"""Handle add_device / remove_device from LabGo server."""
if not device_list:
return
for item in device_list:
target_node_id = item.get("target_node_id", "host_node")
def _notify(target_id: str, act: str, cfg: ResourceDictType):
try:
host_node = HostNode.get_instance(timeout=5)
if not host_node:
logger.error(f"[DeviceManage] HostNode not available for {act}_device")
return
success = host_node.notify_device_manage(target_id, act, cfg)
if success:
logger.info(f"[DeviceManage] {act}_device completed on {target_id}")
else:
logger.warning(f"[DeviceManage] {act}_device failed on {target_id}")
except Exception as e:
logger.error(f"[DeviceManage] Error in {act}_device: {e}")
logger.error(traceback.format_exc())
thread = threading.Thread(
target=_notify,
args=(target_node_id, action, item),
daemon=True,
name=f"DeviceManage-{action}-{item.get('id', '')}",
)
thread.start()
async def _handle_request_restart(self, data: Dict[str, Any]):
"""
处理重启请求
当LabGo发送request_restart时执行清理并触发重启
"""
reason = data.get("reason", "unknown")
delay = data.get("delay", 2) # 默认延迟2秒
logger.info(f"[MessageProcessor] Received restart request, reason: {reason}, delay: {delay}s")
# Send the acknowledgement
self.send_message(
{"action": "restart_acknowledged", "data": {"reason": reason, "delay": delay}}
)
# Set the global restart flags
import unilabos.app.main as main_module
main_module._restart_requested = True
main_module._restart_reason = reason
# Perform cleanup after the delay
await asyncio.sleep(delay)
# Run cleanup in a new thread to avoid blocking the current event loop
def do_cleanup():
import time
time.sleep(0.5)  # give the current message handling time to finish
logger.info(f"[MessageProcessor] Starting cleanup for restart, reason: {reason}")
try:
from unilabos.app.utils import cleanup_for_restart
if cleanup_for_restart():
logger.info("[MessageProcessor] Cleanup successful, main() will restart")
else:
logger.error("[MessageProcessor] Cleanup failed")
except Exception as e:
logger.error(f"[MessageProcessor] Error during cleanup: {e}")
cleanup_thread = threading.Thread(target=do_cleanup, name="RestartCleanupThread", daemon=True)
cleanup_thread.start()
logger.info(f"[MessageProcessor] Restart cleanup scheduled")
async def _send_action_state_response(
self, device_id: str, action_name: str, task_id: str, job_id: str, typ: str, free: bool, need_more: int
):
@@ -1171,14 +959,13 @@ class QueueProcessor:
def stop(self) -> None:
"""停止队列处理线程"""
self.is_running = False
self.queue_update_event.set()  # wake any waiting thread immediately
if self.thread and self.thread.is_alive():
self.thread.join(timeout=2)
logger.info("[QueueProcessor] Stopped")
def _run(self):
"""运行队列处理主循环"""
logger.trace("[QueueProcessor] Queue processor started")
logger.debug("[QueueProcessor] Queue processor started")
while self.is_running:
try:
@@ -1272,11 +1059,6 @@ class QueueProcessor:
logger.debug(f"[QueueProcessor] Sending busy status for {len(queued_jobs)} queued jobs")
for job_info in queued_jobs:
# The snapshot may be stale: end_job() may have moved this job to READY while we iterate;
# in that case do not send busy/need_more, or it would overwrite the free=True notification already sent
if job_info.status != JobStatus.QUEUE:
continue
message = {
"action": "report_action_state",
"data": {
@@ -1292,7 +1074,7 @@ class QueueProcessor:
success = self.message_processor.send_message(message)
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
if success:
logger.trace(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
logger.debug(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
else:
logger.warning(f"[QueueProcessor] Failed to send busy status for job {job_log}")
@@ -1315,7 +1097,7 @@ class QueueProcessor:
job_info.action_name,
)
logger.trace(f"[QueueProcessor] Job {job_log} completed with status: {status}")
logger.info(f"[QueueProcessor] Job {job_log} completed with status: {status}")
# Finish the job and fetch the next runnable one
next_job = self.device_manager.end_job(job_id)
@@ -1335,8 +1117,8 @@ class QueueProcessor:
},
}
self.message_processor.send_message(message)
# next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
# logger.debug(f"[QueueProcessor] Notified next job {next_job_log} can start")
next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
logger.info(f"[QueueProcessor] Notified next job {next_job_log} can start")
# Immediately trigger the next round of status checks
self.notify_queue_update()
@@ -1393,6 +1175,7 @@ class WebSocketClient(BaseCommunicationClient):
else:
url = f"{scheme}://{parsed.netloc}/api/v1/ws/schedule"
logger.debug(f"[WebSocketClient] URL: {url}")
return url
def start(self) -> None:
@@ -1405,11 +1188,13 @@ class WebSocketClient(BaseCommunicationClient):
logger.error("[WebSocketClient] WebSocket URL not configured")
return
logger.info(f"[WebSocketClient] Starting connection to {self.websocket_url}")
# Start the two core threads
self.message_processor.start()
self.queue_processor.start()
logger.trace("[WebSocketClient] All threads started")
logger.info("[WebSocketClient] All threads started")
def stop(self) -> None:
"""停止WebSocket客户端"""
@@ -1425,8 +1210,8 @@ class WebSocketClient(BaseCommunicationClient):
message = {"action": "normal_exit", "data": {"session_id": session_id}}
self.message_processor.send_message(message)
logger.info(f"[WebSocketClient] Sent normal_exit message with session_id: {session_id}")
# send_handler polls the queue every 100ms; waiting 300ms is enough for the message to be sent
time.sleep(0.3)
# Give the message a moment to be sent out
time.sleep(1)
except Exception as e:
logger.warning(f"[WebSocketClient] Failed to send normal_exit message: {str(e)}")
@@ -1458,7 +1243,7 @@ class WebSocketClient(BaseCommunicationClient):
},
}
self.message_processor.send_message(message)
# logger.trace(f"[WebSocketClient] Device status published: {device_id}.{property_name}")
logger.debug(f"[WebSocketClient] Device status published: {device_id}.{property_name}")
def publish_job_status(
self, feedback_data: dict, item: QueueItem, status: str, return_info: Optional[dict] = None
@@ -1478,7 +1263,7 @@ class WebSocketClient(BaseCommunicationClient):
except (KeyError, AttributeError):
logger.warning(f"[WebSocketClient] Failed to remove job {item.job_id} from HostNode status")
# logger.debug(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
logger.info(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# Notify the queue processor that the job completed (including timed-out jobs)
self.queue_processor.handle_job_completed(item.job_id, status)
@@ -1500,7 +1285,7 @@ class WebSocketClient(BaseCommunicationClient):
self.message_processor.send_message(message)
job_log = format_job_log(item.job_id, item.task_id, item.device_id, item.action_name)
logger.trace(f"[WebSocketClient] Job status published: {job_log} - {status}")
logger.debug(f"[WebSocketClient] Job status published: {job_log} - {status}")
def send_ping(self, ping_id: str, timestamp: float) -> None:
"""发送ping消息"""
@@ -1531,59 +1316,17 @@ class WebSocketClient(BaseCommunicationClient):
logger.warning(f"[WebSocketClient] Failed to cancel job {job_log}")
def publish_host_ready(self) -> None:
"""发布host_node ready信号,包含设备和动作信息"""
"""发布host_node ready信号"""
if self.is_disabled or not self.is_connected():
logger.debug("[WebSocketClient] Not connected, cannot publish host ready signal")
return
# Collect device info
devices = []
machine_name = BasicConfig.machine_name
try:
host_node = HostNode.get_instance(0)
if host_node:
# Gather device info
for device_id, namespace in host_node.devices_names.items():
device_key = (
f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
)
is_online = device_key in host_node._online_devices
# Gather the device's action info
actions = {}
for action_id, client in host_node._action_clients.items():
# action_id format: /namespace/device_id/action_name
if device_id in action_id:
action_name = action_id.split("/")[-1]
actions[action_name] = {
"action_path": action_id,
"action_type": str(type(client).__name__),
}
devices.append(
{
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
}
)
logger.info(f"[WebSocketClient] Collected {len(devices)} devices for host_ready")
except Exception as e:
logger.warning(f"[WebSocketClient] Error collecting device info: {e}")
message = {
"action": "host_node_ready",
"data": {
"status": "ready",
"timestamp": time.time(),
"machine_name": machine_name,
"devices": devices,
},
}
self.message_processor.send_message(message)
logger.info(f"[WebSocketClient] Host node ready signal published with {len(devices)} devices")
logger.info("[WebSocketClient] Host node ready signal published")

View File

@@ -5,7 +5,6 @@ from .separate_protocol import generate_separate_protocol
from .evaporate_protocol import generate_evaporate_protocol
from .evacuateandrefill_protocol import generate_evacuateandrefill_protocol
from .agv_transfer_protocol import generate_agv_transfer_protocol
from .batch_transfer_protocol import generate_batch_transfer_protocol
from .add_protocol import generate_add_protocol
from .centrifuge_protocol import generate_centrifuge_protocol
from .filter_protocol import generate_filter_protocol
@@ -32,7 +31,6 @@ from .hydrogenate_protocol import generate_hydrogenate_protocol
action_protocol_generators = {
AddProtocol: generate_add_protocol,
AGVTransferProtocol: generate_agv_transfer_protocol,
BatchTransferProtocol: generate_batch_transfer_protocol,
AdjustPHProtocol: generate_adjust_ph_protocol,
CentrifugeProtocol: generate_centrifuge_protocol,
CleanProtocol: generate_clean_protocol,

View File

@@ -1,127 +0,0 @@
"""
AGV 编译器共用工具函数
从 physical_setup_graph 中发现 AGV 节点配置,
供 agv_transfer_protocol 和 batch_transfer_protocol 复用。
"""
from typing import Any, Dict, List, Optional
import networkx as nx
def find_agv_config(G: nx.Graph, agv_id: Optional[str] = None) -> Dict[str, Any]:
"""从设备图中发现 AGV 节点,返回其配置
查找策略:
1. 如果指定 agv_id直接读取该节点
2. 否则查找 class 为 "agv_transport_station" 的节点
3. 兜底查找 config 中包含 device_roles 的 workstation 节点
Returns:
{
"agv_id": str,
"device_roles": {"navigator": "...", "arm": "..."},
"route_table": {"A->B": {"nav_command": ..., "arm_pick": ..., "arm_place": ...}},
"capacity": int,
}
"""
if agv_id and agv_id in G.nodes:
node_data = G.nodes[agv_id]
config = _extract_config(node_data)
if config and "device_roles" in config:
return _build_agv_cfg(agv_id, config, G)
# Look for the agv_transport_station class
for nid, ndata in G.nodes(data=True):
node_class = _get_node_class(ndata)
if node_class == "agv_transport_station":
config = _extract_config(ndata)
return _build_agv_cfg(nid, config or {}, G)
# Fallback: look for a workstation with device_roles
for nid, ndata in G.nodes(data=True):
node_class = _get_node_class(ndata)
if node_class == "workstation":
config = _extract_config(ndata)
if config and "device_roles" in config:
return _build_agv_cfg(nid, config, G)
raise ValueError("设备图中未找到 AGV 节点(需 class=agv_transport_station 或 config.device_roles")
def get_agv_capacity(G: nx.Graph, agv_id: str) -> int:
"""从 AGV 的 Warehouse 子节点计算载具容量"""
for neighbor in G.successors(agv_id) if G.is_directed() else G.neighbors(agv_id):
ndata = G.nodes[neighbor]
node_type = _get_node_type(ndata)
if node_type == "warehouse":
config = _extract_config(ndata)
if config:
x = config.get("num_items_x", 1)
y = config.get("num_items_y", 1)
z = config.get("num_items_z", 1)
return x * y * z
# No warehouse child node found; fall back to 0
return 0
def split_batches(items: list, capacity: int) -> List[list]:
"""按 AGV 容量分批
Args:
items: 待转运的物料列表
capacity: AGV 单批次容量
Returns:
分批后的列表的列表
"""
if capacity <= 0:
raise ValueError(f"AGV 容量必须 > 0当前: {capacity}")
return [items[i:i + capacity] for i in range(0, len(items), capacity)]
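As a rough illustration of the two helpers above — capacity as the product of the warehouse grid dimensions, batching as a simple slice over that capacity — here are minimal re-implementations (`capacity_from_grid` is a name invented for this sketch):

```python
def capacity_from_grid(num_items_x=1, num_items_y=1, num_items_z=1):
    # Carrier capacity = product of the warehouse grid dimensions
    return num_items_x * num_items_y * num_items_z

def split_batches(items, capacity):
    # Split the material list into AGV-sized batches
    if capacity <= 0:
        raise ValueError(f"AGV capacity must be > 0, got: {capacity}")
    return [items[i:i + capacity] for i in range(0, len(items), capacity)]

cap = capacity_from_grid(num_items_x=2, num_items_y=2)  # 4 slots
print(split_batches(["s1", "s2", "s3", "s4", "s5"], cap))
# → [['s1', 's2', 's3', 's4'], ['s5']]
```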
def _extract_config(node_data: dict) -> Optional[dict]:
"""从节点数据中提取 config 字段,兼容多种格式"""
# 直接 config 字段
config = node_data.get("config")
if isinstance(config, dict):
return config
# Nested res_content format
res_content = node_data.get("res_content")
if hasattr(res_content, "config"):
return res_content.config if isinstance(res_content.config, dict) else None
if isinstance(res_content, dict):
return res_content.get("config")
return None
def _get_node_class(node_data: dict) -> str:
"""获取节点的 class 字段"""
res_content = node_data.get("res_content")
if hasattr(res_content, "model_dump"):
d = res_content.model_dump()
return d.get("class_", d.get("class", ""))
if isinstance(res_content, dict):
return res_content.get("class_", res_content.get("class", ""))
return node_data.get("class_", node_data.get("class", ""))
def _get_node_type(node_data: dict) -> str:
"""获取节点的 type 字段"""
res_content = node_data.get("res_content")
if hasattr(res_content, "type"):
return res_content.type or ""
if isinstance(res_content, dict):
return res_content.get("type", "")
return node_data.get("type", "")
def _build_agv_cfg(agv_id: str, config: dict, G: nx.Graph) -> Dict[str, Any]:
"""构建标准化的 AGV 配置"""
return {
"agv_id": agv_id,
"device_roles": config.get("device_roles", {}),
"route_table": config.get("route_table", {}),
"capacity": get_agv_capacity(G, agv_id),
}

View File

@@ -2,13 +2,20 @@ from functools import partial
import networkx as nx
import re
import logging
from typing import List, Dict, Any, Union
from .utils.unit_parser import parse_volume_input, parse_mass_input, parse_time_input
from .utils.vessel_parser import get_vessel, find_solid_dispenser, find_connected_stirrer, find_reagent_vessel
from .utils.logger_util import action_log, debug_print
from .utils.logger_util import action_log
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
def debug_print(message):
"""调试输出"""
logger.info(f"[ADD] {message}")
# 🆕 Create the progress-log action helper
create_action_log = partial(action_log, prefix="[ADD]")

View File

@@ -1,12 +1,14 @@
from functools import partial
import networkx as nx
import logging
from typing import List, Dict, Any, Union
from .utils.vessel_parser import get_vessel, find_connected_stirrer
from .utils.logger_util import action_log, debug_print
from .utils.vessel_parser import get_vessel
from .pump_protocol import generate_pump_protocol_with_rinsing
create_action_log = partial(action_log, prefix="[ADJUST_PH]")
logger = logging.getLogger(__name__)
def debug_print(message):
"""调试输出"""
logger.info(f"[ADJUST_PH] {message}")
def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
"""
@@ -19,6 +21,8 @@ def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
Returns:
str: reagent vessel ID
"""
debug_print(f"🔍 正在查找试剂 '{reagent}' 的容器...")
# Alias map for common acid/base reagents
reagent_aliases = {
"hydrochloric acid": ["HCl", "hydrochloric_acid", "hcl", "muriatic_acid"],
@@ -32,13 +36,17 @@ def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
# Build the list of names to search
search_names = [reagent.lower()]
debug_print(f"📋 Base search name: {reagent.lower()}")
# Add aliases
for base_name, aliases in reagent_aliases.items():
if reagent.lower() in base_name.lower() or base_name.lower() in reagent.lower():
search_names.extend([alias.lower() for alias in aliases])
debug_print(f"🔗 Added aliases: {aliases}")
break
debug_print(f"📝 Full search list: {search_names}")
# Build candidate vessel names
possible_names = []
for name in search_names:
@@ -53,15 +61,17 @@ def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
name_clean
])
debug_print(f"搜索容器: {len(possible_names)} 个候选名称")
debug_print(f"🎯 可能的容器名称 (前5个): {possible_names[:5]}... (共{len(possible_names)}个)")
# 第一步:通过容器名称匹配
debug_print(f"📋 方法1: 精确名称匹配...")
for vessel_name in possible_names:
if vessel_name in G.nodes():
debug_print(f"通过名称匹配找到容器: {vessel_name}")
debug_print(f"通过名称匹配找到容器: {vessel_name} 🎯")
return vessel_name
# 第二步:通过模糊匹配
debug_print(f"📋 方法2: 模糊名称匹配...")
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
node_name = G.nodes[node_id].get('name', '').lower()
@@ -69,10 +79,11 @@ def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
# Check whether it contains any search name
for search_name in search_names:
if search_name in node_id.lower() or search_name in node_name:
debug_print(f"Found vessel by fuzzy match: {node_id}")
debug_print(f"Found vessel by fuzzy match: {node_id} 🔍")
return node_id
# Step 3: match by liquid type
debug_print(f"📋 Method 3: liquid type match...")
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
vessel_data = G.nodes[node_id].get('data', {})
@@ -85,15 +96,56 @@ def find_acid_base_vessel(G: nx.DiGraph, reagent: str) -> str:
for search_name in search_names:
if search_name in liquid_type or search_name in reagent_name:
debug_print(f"通过液体类型匹配找到容器: {node_id}")
debug_print(f"通过液体类型匹配找到容器: {node_id} 💧")
return node_id
# 列出可用容器帮助调试
available_containers = [node_id for node_id in G.nodes()
if G.nodes[node_id].get('type') == 'container']
debug_print(f"所有匹配方法失败,可用容器: {available_containers}")
debug_print(f"📊 列出可用容器帮助调试...")
available_containers = []
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
vessel_data = G.nodes[node_id].get('data', {})
liquids = vessel_data.get('liquid', [])
liquid_types = [liquid.get('liquid_type', '') or liquid.get('name', '')
for liquid in liquids if isinstance(liquid, dict)]
available_containers.append({
'id': node_id,
'name': G.nodes[node_id].get('name', ''),
'liquids': liquid_types,
'reagent_name': vessel_data.get('reagent_name', '')
})
debug_print(f"📋 可用容器列表:")
for container in available_containers:
debug_print(f" - 🧪 {container['id']}: {container['name']}")
debug_print(f" 💧 液体: {container['liquids']}")
debug_print(f" 🏷️ 试剂: {container['reagent_name']}")
debug_print(f"❌ 所有匹配方法都失败了")
raise ValueError(f"找不到试剂 '{reagent}' 对应的容器。尝试了: {possible_names[:10]}...")
def find_connected_stirrer(G: nx.DiGraph, vessel: str) -> str:
"""查找与容器相连的搅拌器"""
debug_print(f"🔍 查找连接到容器 '{vessel}' 的搅拌器...")
stirrer_nodes = [node for node in G.nodes()
if (G.nodes[node].get('class') or '') == 'virtual_stirrer']
debug_print(f"📊 发现 {len(stirrer_nodes)} 个搅拌器: {stirrer_nodes}")
for stirrer in stirrer_nodes:
if G.has_edge(stirrer, vessel) or G.has_edge(vessel, stirrer):
debug_print(f"✅ 找到连接的搅拌器: {stirrer} 🔗")
return stirrer
if stirrer_nodes:
debug_print(f"⚠️ 未找到直接连接的搅拌器,使用第一个: {stirrer_nodes[0]} 🔄")
return stirrer_nodes[0]
debug_print(f"❌ 未找到任何搅拌器")
return None
def calculate_reagent_volume(target_ph_value: float, reagent: str, vessel_volume: float = 100.0) -> float:
"""
Estimate the reagent volume needed to adjust the pH
@@ -106,30 +158,44 @@ def calculate_reagent_volume(target_ph_value: float, reagent: str, vessel_volume
Returns:
float: estimated reagent volume (mL)
"""
debug_print(f"计算试剂体积: pH={target_ph_value}, reagent={reagent}, vessel={vessel_volume}mL")
# 简化的pH调节体积估算
debug_print(f"🧮 计算试剂体积...")
debug_print(f" 📍 目标pH: {target_ph_value}")
debug_print(f" 🧪 试剂: {reagent}")
debug_print(f" 📏 容器体积: {vessel_volume}mL")
# 简化的pH调节体积估算实际应用中需要更精确的计算
if "acid" in reagent.lower() or "hcl" in reagent.lower():
debug_print(f"🍋 检测到酸性试剂")
# 酸性试剂pH越低需要的体积越大
if target_ph_value < 3:
volume = vessel_volume * 0.05
volume = vessel_volume * 0.05  # 5%
debug_print(f" 💪 strongly acidic (pH<3): using 5% of vessel volume")
elif target_ph_value < 5:
volume = vessel_volume * 0.02
volume = vessel_volume * 0.02  # 2%
debug_print(f" 🔸 moderately acidic (pH<5): using 2% of vessel volume")
else:
volume = vessel_volume * 0.01
volume = vessel_volume * 0.01  # 1%
debug_print(f" 🔹 weakly acidic (pH≥5): using 1% of vessel volume")
elif "hydroxide" in reagent.lower() or "naoh" in reagent.lower():
debug_print(f"🧂 Basic reagent detected")
# Basic reagent: the higher the target pH, the larger the required volume
if target_ph_value > 11:
volume = vessel_volume * 0.05
volume = vessel_volume * 0.05  # 5%
debug_print(f" 💪 strongly basic (pH>11): using 5% of vessel volume")
elif target_ph_value > 9:
volume = vessel_volume * 0.02
volume = vessel_volume * 0.02  # 2%
debug_print(f" 🔸 moderately basic (pH>9): using 2% of vessel volume")
else:
volume = vessel_volume * 0.01
volume = vessel_volume * 0.01  # 1%
debug_print(f" 🔹 weakly basic (pH≤9): using 1% of vessel volume")
else:
# Unknown reagent: use the default value
volume = vessel_volume * 0.01
debug_print(f"Estimated reagent volume: {volume:.2f}mL")
debug_print(f"❓ Unknown reagent type, using the default 1% of vessel volume")
debug_print(f"📊 Result: {volume:.2f}mL")
return volume
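For illustration, here is the acid branch of the tiered estimate above as a minimal stand-alone function (the real function also handles bases and unknown reagents):

```python
def estimate_acid_volume(target_ph, vessel_volume=100.0):
    # Lower target pH → larger fraction of the vessel volume
    if target_ph < 3:
        return vessel_volume * 0.05  # strongly acidic: 5%
    if target_ph < 5:
        return vessel_volume * 0.02  # moderately acidic: 2%
    return vessel_volume * 0.01      # weakly acidic: 1%

print(estimate_acid_volume(2.0))  # → 5.0
print(estimate_acid_volume(4.0))  # → 2.0
print(estimate_acid_volume(6.0))  # → 1.0
```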
def generate_adjust_ph_protocol(
@@ -154,67 +220,96 @@ def generate_adjust_ph_protocol(
"""
vessel_id, vessel_data = get_vessel(vessel)
if not vessel_id:
debug_print(f"❌ vessel 参数无效必须包含id字段或直接提供容器ID. vessel: {vessel}")
raise ValueError("vessel 参数无效必须包含id字段或直接提供容器ID")
debug_print(f"pH调节协议: vessel={vessel_id}, ph={ph_value}, reagent='{reagent}'")
debug_print("=" * 60)
debug_print("🧪 开始生成pH调节协议")
debug_print(f"📋 原始参数:")
debug_print(f" 🥼 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 📊 ph_value: {ph_value}")
debug_print(f" 🧪 reagent: '{reagent}'")
debug_print(f" 📦 kwargs: {kwargs}")
debug_print("=" * 60)
action_sequence = []
# Read optional parameters from kwargs
volume = kwargs.get('volume', 0.0)
stir = kwargs.get('stir', True)
stir_speed = kwargs.get('stir_speed', 300.0)
stir_time = kwargs.get('stir_time', 60.0)
settling_time = kwargs.get('settling_time', 30.0)
# Read optional parameters from kwargs, falling back to defaults
volume = kwargs.get('volume', 0.0)  # auto-estimate the volume
stir = kwargs.get('stir', True)  # stir by default
stir_speed = kwargs.get('stir_speed', 300.0)  # default stir speed
stir_time = kwargs.get('stir_time', 60.0)  # default stir time
settling_time = kwargs.get('settling_time', 30.0)  # default settling time
debug_print(f"🔧 Processed parameters:")
debug_print(f" 📏 volume: {volume}mL (0.0 means auto-estimate)")
debug_print(f" 🌪️ stir: {stir}")
debug_print(f" 🔄 stir_speed: {stir_speed}rpm")
debug_print(f" ⏱️ stir_time: {stir_time}s")
debug_print(f" ⏳ settling_time: {settling_time}s")
# Start processing
action_sequence.append(create_action_log(f"Starting pH adjustment to {ph_value}", "🧪"))
action_sequence.append(create_action_log(f"Target vessel: {vessel_id}", "🥼"))
action_sequence.append(create_action_log(f"Using reagent: {reagent}", "⚗️"))
# 1. Verify the target vessel exists
debug_print(f"🔍 Step 1: verifying the target vessel...")
if vessel_id not in G.nodes():
debug_print(f"❌ Target vessel '{vessel_id}' does not exist in the system")
raise ValueError(f"Target vessel '{vessel_id}' does not exist in the system")
debug_print(f"✅ Target vessel verified")
action_sequence.append(create_action_log("Target vessel verified", ""))
# 2. Find the acid/base reagent vessel
debug_print(f"🔍 Step 2: finding the reagent vessel...")
action_sequence.append(create_action_log("Looking for the reagent vessel...", "🔍"))
try:
reagent_vessel = find_acid_base_vessel(G, reagent)
debug_print(f"✅ Found reagent vessel: {reagent_vessel}")
action_sequence.append(create_action_log(f"Found reagent vessel: {reagent_vessel}", "🧪"))
except ValueError as e:
debug_print(f"❌ Could not find the reagent vessel: {str(e)}")
action_sequence.append(create_action_log(f"Reagent vessel lookup failed: {str(e)}", ""))
raise ValueError(f"Could not find reagent '{reagent}': {str(e)}")
# 3. Volume estimation
debug_print(f"🔍 Step 3: volume handling...")
if volume <= 0:
action_sequence.append(create_action_log("Auto-estimating the reagent volume", "🧮"))
# Read the target vessel's volume info
vessel_data = G.nodes[vessel_id].get('data', {})
vessel_volume = vessel_data.get('max_volume', 100.0)
vessel_volume = vessel_data.get('max_volume', 100.0)  # default 100mL
debug_print(f"📏 Vessel max volume: {vessel_volume}mL")
estimated_volume = calculate_reagent_volume(ph_value, reagent, vessel_volume)
volume = estimated_volume
debug_print(f"✅ Auto-estimated reagent volume: {volume:.2f} mL")
action_sequence.append(create_action_log(f"Estimated reagent volume: {volume:.2f}mL", "📊"))
else:
debug_print(f"📏 Using the specified volume: {volume}mL")
action_sequence.append(create_action_log(f"Using the specified volume: {volume}mL", "📏"))
# 4. Verify the transfer path exists
debug_print(f"🔍 Step 4: path verification...")
action_sequence.append(create_action_log("Verifying the transfer path...", "🛤️"))
try:
path = nx.shortest_path(G, source=reagent_vessel, target=vessel_id)
action_sequence.append(create_action_log(f"Found transfer path: {' -> '.join(path)}", "🛤️"))
debug_print(f"Found path: {' -> '.join(path)}")
action_sequence.append(create_action_log(f"Found transfer path: {' -> '.join(path)}", "🛤️"))
except nx.NetworkXNoPath:
debug_print(f"❌ No transfer path found")
action_sequence.append(create_action_log("No transfer path exists", ""))
raise ValueError(f"No path available from reagent vessel '{reagent_vessel}' to target vessel '{vessel_id}'")
# 5. Stirrer setup
debug_print(f"🔍 Step 5: stirrer setup...")
stirrer_id = None
if stir:
action_sequence.append(create_action_log("Preparing to start the stirrer", "🌪️"))
@@ -223,6 +318,7 @@ def generate_adjust_ph_protocol(
stirrer_id = find_connected_stirrer(G, vessel_id)
if stirrer_id:
debug_print(f"✅ 找到搅拌器 {stirrer_id},启动搅拌")
action_sequence.append(create_action_log(f"启动搅拌器 {stirrer_id} (速度: {stir_speed}rpm)", "🔄"))
action_sequence.append({
@@ -242,18 +338,23 @@ def generate_adjust_ph_protocol(
"action_kwargs": {"time": 5}
})
else:
debug_print(f"⚠️ No stirrer found; continuing")
action_sequence.append(create_action_log("No stirrer found, skipping stirring", "⚠️"))
except Exception as e:
debug_print(f"❌ Stirrer configuration failed: {str(e)}")
action_sequence.append(create_action_log(f"Stirrer configuration failed: {str(e)}", ""))
else:
debug_print(f"📋 Skipping stirrer setup")
action_sequence.append(create_action_log("Skipping stirrer setup", "⏭️"))
# 6. Reagent addition
debug_print(f"🔍 Step 6: reagent addition...")
action_sequence.append(create_action_log(f"Adding {volume:.2f}mL of reagent", "🚰"))
# Compute the addition time: pH adjustment requires slow addition
addition_time = max(30.0, volume * 2.0)
addition_time = max(30.0, volume * 2.0)  # at least 30s, 2s per mL
debug_print(f"⏱️ Computed addition time: {addition_time}s (slow injection)")
action_sequence.append(create_action_log(f"Addition time set to {addition_time:.0f}s (slow injection)", "⏱️"))
try:
@@ -276,28 +377,35 @@ def generate_adjust_ph_protocol(
)
action_sequence.extend(pump_actions)
debug_print(f"✅ 泵协议生成完成,添加了 {len(pump_actions)} 个动作")
action_sequence.append(create_action_log(f"试剂转移完成 ({len(pump_actions)} 个操作)", ""))
# 体积运算 - 试剂添加成功后更新容器液体体积
# 🔧 修复体积运算 - 试剂添加成功后更新容器液体体积
debug_print(f"🔧 更新容器液体体积...")
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
debug_print(f"📊 添加前容器体积: {current_volume}")
# Handle the different volume data formats
if isinstance(current_volume, list):
if len(current_volume) > 0:
# Increase the volume (reagent added)
vessel["data"]["liquid_volume"][0] += volume
debug_print(f"📊 Vessel volume after addition: {vessel['data']['liquid_volume'][0]:.2f}mL (+{volume:.2f}mL)")
else:
# If the list is empty, create a new volume record
vessel["data"]["liquid_volume"] = [volume]
debug_print(f"📊 Initialized vessel volume: {volume:.2f}mL")
elif isinstance(current_volume, (int, float)):
# Plain numeric type
vessel["data"]["liquid_volume"] += volume
debug_print(f"📊 Vessel volume after addition: {vessel['data']['liquid_volume']:.2f}mL (+{volume:.2f}mL)")
else:
debug_print(f"Unknown volume data format: {type(current_volume)}")
debug_print(f"⚠️ Unknown volume data format: {type(current_volume)}")
# Create a new volume record
vessel["data"]["liquid_volume"] = volume
else:
debug_print(f"📊 Vessel has no liquid volume data; creating a new record: {volume:.2f}mL")
# Make sure vessel has a data field
if "data" not in vessel:
vessel["data"] = {}
@@ -315,16 +423,19 @@ def generate_adjust_ph_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = current_node_volume + volume
debug_print(f"✅ 图节点体积数据已更新")
action_sequence.append(create_action_log(f"容器体积已更新 (+{volume:.2f}mL)", "📊"))
except Exception as e:
debug_print(f"生成泵协议时出错: {str(e)}")
debug_print(f"生成泵协议时出错: {str(e)}")
action_sequence.append(create_action_log(f"泵协议生成失败: {str(e)}", ""))
raise ValueError(f"生成泵协议时出错: {str(e)}")
# 7. Mixing
if stir and stirrer_id:
debug_print(f"🔍 Step 7: mixing...")
action_sequence.append(create_action_log(f"Mixing for {stir_time:.0f}s", "🌀"))
action_sequence.append({
@@ -337,10 +448,14 @@ def generate_adjust_ph_protocol(
"purpose": f"pH调节: 混合试剂目标pH={ph_value}"
}
})
debug_print(f"✅ 混合搅拌设置完成")
else:
debug_print(f"⏭️ 跳过混合搅拌")
action_sequence.append(create_action_log("跳过混合搅拌", "⏭️"))
# 8. 等待平衡
debug_print(f"🔍 步骤8: 反应平衡...")
action_sequence.append(create_action_log(f"等待pH平衡 {settling_time:.0f}s", "⚖️"))
action_sequence.append({
@@ -353,7 +468,17 @@ def generate_adjust_ph_protocol(
# 9. Summary
total_time = addition_time + stir_time + settling_time
debug_print(f"pH adjustment protocol complete: {len(action_sequence)} actions, {total_time:.0f}s, {volume:.2f}mL {reagent} -> {vessel_id} pH {ph_value}")
debug_print("=" * 60)
debug_print(f"🎉 pH adjustment protocol generated")
debug_print(f"📊 Protocol statistics:")
debug_print(f" 📋 total actions: {len(action_sequence)}")
debug_print(f" ⏱️ estimated total time: {total_time:.0f}s ({total_time/60:.1f} min)")
debug_print(f" 🧪 reagent: {reagent}")
debug_print(f" 📏 volume: {volume:.2f}mL")
debug_print(f" 📊 target pH: {ph_value}")
debug_print(f" 🥼 target vessel: {vessel_id}")
debug_print("=" * 60)
# Append the completion log
summary_msg = f"pH adjustment protocol complete: {vessel_id} → pH {ph_value} (using {volume:.2f}mL {reagent})"
@@ -385,18 +510,28 @@ def generate_adjust_ph_protocol_stepwise(
"""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id = vessel["id"]
debug_print(f"Stepwise pH adjustment: vessel={vessel_id}, ph={ph_value}, reagent={reagent}, max_volume={max_volume}mL, steps={steps}")
debug_print("=" * 60)
debug_print(f"🔄 Starting stepwise pH adjustment")
debug_print(f"📋 Stepwise parameters:")
debug_print(f" 🥼 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 📊 ph_value: {ph_value}")
debug_print(f" 🧪 reagent: {reagent}")
debug_print(f" 📏 max_volume: {max_volume}mL")
debug_print(f" 🔢 steps: {steps}")
debug_print("=" * 60)
action_sequence = []
# Volume added per step
step_volume = max_volume / steps
debug_print(f"📊 Volume per step: {step_volume:.2f}mL")
action_sequence.append(create_action_log(f"Starting stepwise pH adjustment ({steps} steps)", "🔄"))
action_sequence.append(create_action_log(f"Per-step addition: {step_volume:.2f}mL", "📏"))
for i in range(steps):
debug_print(f"🔄 Running step {i+1}/{steps}, adding {step_volume:.2f}mL")
action_sequence.append(create_action_log(f"Step {i+1}/{steps} started", "🚀"))
# Generate the single-step protocol
@@ -413,10 +548,12 @@ def generate_adjust_ph_protocol_stepwise(
)
action_sequence.extend(step_actions)
debug_print(f"✅ 第 {i+1}/{steps} 步完成,添加了 {len(step_actions)} 个动作")
action_sequence.append(create_action_log(f"{i+1}/{steps} 步完成", ""))
# 步骤间等待
if i < steps - 1:
debug_print(f"⏳ 步骤间等待30s")
action_sequence.append(create_action_log("步骤间等待...", ""))
action_sequence.append({
"action_name": "wait",
@@ -426,7 +563,7 @@ def generate_adjust_ph_protocol_stepwise(
}
})
debug_print(f"分步pH调节完成: {len(action_sequence)} 个动作")
debug_print(f"🎉 分步pH调节完成,共 {len(action_sequence)} 个动作")
action_sequence.append(create_action_log("分步pH调节全部完成", "🎉"))
return action_sequence
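The stepwise loop above splits the total volume evenly and inserts a wait between consecutive steps. A compact plan-builder sketch of the same idea (`stepwise_plan` is illustrative, not the project's API; the 30s wait mirrors the code):

```python
def stepwise_plan(max_volume, steps, wait_s=30):
    step_volume = max_volume / steps  # volume added per step
    plan = []
    for i in range(steps):
        plan.append(("add", round(step_volume, 3)))
        if i < steps - 1:  # no wait after the final step
            plan.append(("wait", wait_s))
    return plan

print(stepwise_plan(9.0, 3))
# → [('add', 3.0), ('wait', 30), ('add', 3.0), ('wait', 30), ('add', 3.0)]
```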
@@ -440,7 +577,7 @@ def generate_acidify_protocol(
) -> List[Dict[str, Any]]:
"""酸化协议"""
vessel_id = vessel["id"]
debug_print(f"酸化协议: {vessel_id} → pH {target_ph} ({acid})")
debug_print(f"🍋 生成酸化协议: {vessel_id} → pH {target_ph} (使用 {acid})")
return generate_adjust_ph_protocol(
G, vessel, target_ph, acid
)
@@ -453,7 +590,7 @@ def generate_basify_protocol(
) -> List[Dict[str, Any]]:
"""碱化协议"""
vessel_id = vessel["id"]
debug_print(f"碱化协议: {vessel_id} → pH {target_ph} ({base})")
debug_print(f"🧂 生成碱化协议: {vessel_id} → pH {target_ph} (使用 {base})")
return generate_adjust_ph_protocol(
G, vessel, target_ph, base
)
@@ -465,7 +602,7 @@ def generate_neutralize_protocol(
) -> List[Dict[str, Any]]:
"""中和协议pH=7"""
vessel_id = vessel["id"]
debug_print(f"中和协议: {vessel_id} → pH 7.0 ({reagent})")
debug_print(f"⚖️ 生成中和协议: {vessel_id} → pH 7.0 (使用 {reagent})")
return generate_adjust_ph_protocol(
G, vessel, 7.0, reagent
)
@@ -473,7 +610,10 @@ def generate_neutralize_protocol(
# Test function
def test_adjust_ph_protocol():
"""Test the pH adjustment protocol"""
debug_print("=== ADJUST PH PROTOCOL enhanced test ===")
# Test the volume calculation
debug_print("🧮 Testing volume calculation...")
test_cases = [
(2.0, "hydrochloric acid", 100.0),
(4.0, "hydrochloric acid", 100.0),
@@ -481,12 +621,12 @@ def test_adjust_ph_protocol():
(10.0, "sodium hydroxide", 100.0),
(7.0, "unknown reagent", 100.0)
]
for ph, reagent, volume in test_cases:
result = calculate_reagent_volume(ph, reagent, volume)
debug_print(f"{reagent} → pH {ph}: {result:.2f}mL")
debug_print(f"📊 {reagent} → pH {ph}: {result:.2f}mL")
debug_print("测试完成")
if __name__ == "__main__":
test_adjust_ph_protocol()


@@ -1,12 +1,4 @@
"""
AGV single-item transfer compiler.
Queries the AGV configuration (device_roles, route_table) from physical_setup_graph
instead of hardcoding device_id and the route table.
"""
import networkx as nx
from unilabos.compile._agv_utils import find_agv_config
def generate_agv_transfer_protocol(
@@ -25,32 +17,37 @@ def generate_agv_transfer_protocol(
from_repo_id = from_repo_["id"]
to_repo_id = to_repo_["id"]
    # Removed (AGV config was queried from G):
    agv_cfg = find_agv_config(G)
    device_roles = agv_cfg["device_roles"]
    route_table = agv_cfg["route_table"]
    route_key = f"{from_repo_id}->{to_repo_id}"
    if route_key not in route_table:
        raise KeyError(f"AGV 路由表中未找到路线: {route_key},可用路线: {list(route_table.keys())}")
    route = route_table[route_key]
    nav_device = device_roles.get("navigator", device_roles.get("nav"))
    arm_device = device_roles.get("arm")
    # Added (hardcoded workflow table keyed by (from_repo_id, to_repo_id)):
    wf_list = {
        ("AiChemEcoHiWo", "zhixing_agv"): {"nav_command": '{"target" : "LM14"}',
                                           "arm_command": '{"task_name" : "camera/250111_biaozhi.urp"}'},
        ("AiChemEcoHiWo", "AGV"): {"nav_command": '{"target" : "LM14"}',
                                   "arm_command": '{"task_name" : "camera/250111_biaozhi.urp"}'},
        ("zhixing_agv", "Revvity"): {"nav_command": '{"target" : "LM13"}',
                                     "arm_command": '{"task_name" : "camera/250111_put_board.urp"}'},
        ("AGV", "Revvity"): {"nav_command": '{"target" : "LM13"}',
                             "arm_command": '{"task_name" : "camera/250111_put_board.urp"}'},
        ("Revvity", "HPLC"): {"nav_command": '{"target" : "LM13"}',
                              "arm_command": '{"task_name" : "camera/250111_hplc.urp"}'},
        ("HPLC", "Revvity"): {"nav_command": '{"target" : "LM13"}',
                              "arm_command": '{"task_name" : "camera/250111_lfp.urp"}'},
    }
    # Removed return (role-based devices, route-table commands):
    # return [
    #     {"device_id": nav_device, "action_name": "send_nav_task",
    #      "action_kwargs": {"command": route["nav_command"]}},
    #     {"device_id": arm_device, "action_name": "move_pos_task",
    #      "action_kwargs": {"command": route.get("arm_command", route.get("arm_place", ""))}},
    # ]
    # Added return (fixed device IDs, wf_list commands):
    return [
        {
            "device_id": "zhixing_agv",
            "action_name": "send_nav_task",
            "action_kwargs": {
                "command": wf_list[(from_repo_id, to_repo_id)]["nav_command"]
            }
        },
        {
            "device_id": "zhixing_ur_arm",
            "action_name": "move_pos_task",
            "action_kwargs": {
                "command": wf_list[(from_repo_id, to_repo_id)]["arm_command"]
            }
        }
    ]
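The hardcoded variant of this compiler keys its navigation and arm commands on `(from_repo_id, to_repo_id)` tuples. A self-contained sketch of that lookup-and-emit pattern (table trimmed to two illustrative routes):

```python
# Assumed hardcoded route table, trimmed to two routes for illustration.
wf_list = {
    ("AiChemEcoHiWo", "zhixing_agv"): {
        "nav_command": '{"target" : "LM14"}',
        "arm_command": '{"task_name" : "camera/250111_biaozhi.urp"}',
    },
    ("Revvity", "HPLC"): {
        "nav_command": '{"target" : "LM13"}',
        "arm_command": '{"task_name" : "camera/250111_hplc.urp"}',
    },
}

def build_steps(from_repo_id: str, to_repo_id: str) -> list:
    """Emit the nav step followed by the arm step for one route."""
    key = (from_repo_id, to_repo_id)
    if key not in wf_list:
        raise KeyError(f"no route for {key}")
    route = wf_list[key]
    return [
        {"device_id": "zhixing_agv", "action_name": "send_nav_task",
         "action_kwargs": {"command": route["nav_command"]}},
        {"device_id": "zhixing_ur_arm", "action_name": "move_pos_task",
         "action_kwargs": {"command": route["arm_command"]}},
    ]
```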


@@ -1,228 +0,0 @@
"""
Batch material transfer compiler.
Compiles a BatchTransferProtocol into a multi-batch nav → pick × N → nav → place × N action sequence.
Batches are split automatically by AGV capacity, keeping all three parties' children dicts consistent throughout.
"""
import copy
from typing import Any, Dict, List
import networkx as nx
from unilabos.compile._agv_utils import find_agv_config, split_batches
def generate_batch_transfer_protocol(
G: nx.Graph,
from_repo: dict,
to_repo: dict,
transfer_resources: list,
from_positions: list,
to_positions: list,
) -> List[Dict[str, Any]]:
"""Compile a batch transfer protocol into executable action steps.
Args:
G: device graph (physical_setup_graph)
from_repo: source station resource dict, {station_id: {..., children: {...}}}
to_repo: target station resource dict (includes stack and position info)
transfer_resources: list of materials to transfer (Resource dicts)
from_positions: source slot positions (parallel to transfer_resources)
to_positions: target slot positions (parallel to transfer_resources)
Returns:
list of action steps, executed in order by the ROS2 WorkstationNode
"""
if not transfer_resources:
return []
n = len(transfer_resources)
if len(from_positions) != n or len(to_positions) != n:
raise ValueError(
f"transfer_resources({n}), from_positions({len(from_positions)}), "
f"to_positions({len(to_positions)}) 长度不一致"
)
# 组合为内部 transfer_items 便于分批处理
transfer_items = []
for i in range(n):
res = transfer_resources[i] if isinstance(transfer_resources[i], dict) else {}
transfer_items.append({
"resource_id": res.get("id", res.get("name", "")),
"resource_uuid": res.get("sample_id", ""),
"from_position": from_positions[i],
"to_position": to_positions[i],
"resource": res,
})
# 查询 AGV 配置
agv_cfg = find_agv_config(G)
agv_id = agv_cfg["agv_id"]
device_roles = agv_cfg["device_roles"]
route_table = agv_cfg["route_table"]
capacity = agv_cfg["capacity"]
if capacity <= 0:
raise ValueError(f"AGV {agv_id} 容量为 0,请检查 Warehouse 子节点配置")
nav_device = device_roles.get("navigator", device_roles.get("nav"))
arm_device = device_roles.get("arm")
if not nav_device or not arm_device:
raise ValueError(f"AGV {agv_id} device_roles 缺少 navigator 或 arm: {device_roles}")
from_repo_ = list(from_repo.values())[0]
to_repo_ = list(to_repo.values())[0]
from_station_id = from_repo_["id"]
to_station_id = to_repo_["id"]
# 查找路由
route_to_source = _find_route(route_table, agv_id, from_station_id)
route_to_target = _find_route(route_table, from_station_id, to_station_id)
# 构建 AGV carrier 的 children dict用于 compile 阶段状态追踪)
agv_carrier_children: Dict[str, Any] = {}
# 计算 slot 名称A01, A02, B01, ...
agv_slot_names = _get_agv_slot_names(G, agv_cfg)
# 分批
batches = split_batches(transfer_items, capacity)
steps: List[Dict[str, Any]] = []
for batch_idx, batch in enumerate(batches):
is_last_batch = (batch_idx == len(batches) - 1)
# 阶段 1: AGV 导航到来源工站
steps.append({
"device_id": nav_device,
"action_name": "send_nav_task",
"action_kwargs": {
"command": route_to_source.get("nav_command", "")
},
"_comment": f"批次{batch_idx + 1}/{len(batches)}: AGV 导航至来源 {from_station_id}"
})
# 阶段 2: 逐个 pick
for item_idx, item in enumerate(batch):
from_pos = item["from_position"]
slot = agv_slot_names[item_idx] if item_idx < len(agv_slot_names) else f"S{item_idx + 1}"
# compile 阶段更新 children dict
if from_pos in from_repo_.get("children", {}):
resource_data = from_repo_["children"].pop(from_pos)
resource_data["parent"] = agv_id
agv_carrier_children[slot] = resource_data
steps.append({
"device_id": arm_device,
"action_name": "move_pos_task",
"action_kwargs": {
"command": route_to_source.get("arm_pick", route_to_source.get("arm_command", ""))
},
"_transfer_meta": {
"phase": "pick",
"resource_uuid": item.get("resource_uuid", ""),
"resource_id": item.get("resource_id", ""),
"from_parent": from_station_id,
"from_position": from_pos,
"agv_slot": slot,
},
"_comment": f"Pick {item.get('resource_id', from_pos)} → AGV.{slot}"
})
# 阶段 3: AGV 导航到目标工站
steps.append({
"device_id": nav_device,
"action_name": "send_nav_task",
"action_kwargs": {
"command": route_to_target.get("nav_command", "")
},
"_comment": f"批次{batch_idx + 1}: AGV 导航至目标 {to_station_id}"
})
# 阶段 4: 逐个 place
for item_idx, item in enumerate(batch):
to_pos = item["to_position"]
slot = agv_slot_names[item_idx] if item_idx < len(agv_slot_names) else f"S{item_idx + 1}"
# compile 阶段更新 children dict
if slot in agv_carrier_children:
resource_data = agv_carrier_children.pop(slot)
resource_data["parent"] = to_repo_["id"]
to_repo_["children"][to_pos] = resource_data
steps.append({
"device_id": arm_device,
"action_name": "move_pos_task",
"action_kwargs": {
"command": route_to_target.get("arm_place", route_to_target.get("arm_command", ""))
},
"_transfer_meta": {
"phase": "place",
"resource_uuid": item.get("resource_uuid", ""),
"resource_id": item.get("resource_id", ""),
"to_parent": to_station_id,
"to_position": to_pos,
"agv_slot": slot,
},
"_comment": f"Place AGV.{slot}{to_station_id}.{to_pos}"
})
# 如果还有下一批AGV 需要返回来源取料
if not is_last_batch:
steps.append({
"device_id": nav_device,
"action_name": "send_nav_task",
"action_kwargs": {
"command": route_to_source.get("nav_command", "")
},
"_comment": f"AGV 返回来源 {from_station_id} 取下一批"
})
return steps
def _find_route(route_table: Dict[str, Any], from_id: str, to_id: str) -> Dict[str, str]:
"""在路由表中查找路线,支持 A->B 和 (A, B) 两种 key 格式"""
# 优先 "A->B" 格式
key = f"{from_id}->{to_id}"
if key in route_table:
return route_table[key]
# 兼容 tuple keyJSON 中以逗号分隔字符串表示)
tuple_key = f"({from_id}, {to_id})"
if tuple_key in route_table:
return route_table[tuple_key]
raise KeyError(f"路由表中未找到: {key},可用路线: {list(route_table.keys())}")
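`_find_route` accepts two key spellings because tuple keys do not survive a round-trip through JSON. A standalone sketch of the same fallback order (behavior assumed from the code above):

```python
def find_route(route_table: dict, from_id: str, to_id: str) -> dict:
    """Look up a route, accepting both 'A->B' and '(A, B)' key spellings."""
    key = f"{from_id}->{to_id}"
    if key in route_table:
        return route_table[key]
    # tuple keys serialized to JSON come back as '(A, B)' strings
    tuple_key = f"({from_id}, {to_id})"
    if tuple_key in route_table:
        return route_table[tuple_key]
    raise KeyError(f"route not found: {key}, available: {list(route_table.keys())}")
```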
def _get_agv_slot_names(G: nx.Graph, agv_cfg: dict) -> List[str]:
"""从设备图中获取 AGV Warehouse 的 slot 名称列表"""
agv_id = agv_cfg["agv_id"]
neighbors = G.successors(agv_id) if G.is_directed() else G.neighbors(agv_id)
for neighbor in neighbors:
ndata = G.nodes[neighbor]
node_type = ndata.get("type", "")
res_content = ndata.get("res_content")
if hasattr(res_content, "type"):
node_type = res_content.type or node_type
elif isinstance(res_content, dict):
node_type = res_content.get("type", node_type)
if node_type == "warehouse":
config = ndata.get("config", {})
if hasattr(res_content, "config") and isinstance(res_content.config, dict):
config = res_content.config
elif isinstance(res_content, dict):
config = res_content.get("config", config)
num_x = config.get("num_items_x", 1)
num_y = config.get("num_items_y", 1)
num_z = config.get("num_items_z", 1)
# 与 warehouse_factory 一致的命名
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
len_x = num_x if num_z == 1 else (num_y if num_x == 1 else num_x)
len_y = num_y if num_z == 1 else (num_z if num_x == 1 else num_z)
return [f"{letters[j]}{i + 1:02d}" for i in range(len_x) for j in range(len_y)]
# 兜底生成通用名称
capacity = agv_cfg.get("capacity", 4)
return [f"S{i + 1}" for i in range(capacity)]
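The slot naming above mirrors `warehouse_factory`: a letter indexes one axis and a zero-padded number the other, with the axis choice depending on which dimensions are 1. A runnable sketch of just that naming rule:

```python
def warehouse_slot_names(num_x: int, num_y: int, num_z: int = 1) -> list:
    """Reproduce the warehouse slot naming: letter = one axis, 2-digit number = the other."""
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    # axis selection follows the source: prefer x/y for flat (num_z == 1) layouts
    len_x = num_x if num_z == 1 else (num_y if num_x == 1 else num_x)
    len_y = num_y if num_z == 1 else (num_z if num_x == 1 else num_z)
    return [f"{letters[j]}{i + 1:02d}" for i in range(len_x) for j in range(len_y)]

# a 2x2 flat warehouse yields A01, B01, A02, B02
```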


@@ -1,9 +1,7 @@
from typing import List, Dict, Any
import networkx as nx
from .utils.vessel_parser import get_vessel, find_solvent_vessel, find_connected_heatchill
from .utils.logger_util import debug_print
from .utils.vessel_parser import get_vessel, find_solvent_vessel
from .pump_protocol import generate_pump_protocol
from .utils.resource_helper import get_resource_liquid_volume
def find_solvent_vessel_by_any_match(G: nx.DiGraph, solvent: str) -> str:
@@ -19,23 +17,43 @@ def find_waste_vessel(G: nx.DiGraph) -> str:
"""
possible_waste_names = [
"waste_workup",
"flask_waste",
"bottle_waste",
"waste",
"waste_vessel",
"waste_container"
]
for waste_name in possible_waste_names:
if waste_name in G.nodes():
return waste_name
raise ValueError(f"未找到废液容器。尝试了以下名称: {possible_waste_names}")
def find_connected_heatchill(G: nx.DiGraph, vessel: str) -> str:
"""
查找与指定容器相连的加热冷却设备
"""
# 查找所有加热冷却设备节点
heatchill_nodes = [node for node in G.nodes()
if (G.nodes[node].get('class') or '') == 'virtual_heatchill']
# 检查哪个加热设备与目标容器相连(机械连接)
for heatchill in heatchill_nodes:
if G.has_edge(heatchill, vessel) or G.has_edge(vessel, heatchill):
return heatchill
# 如果没有直接连接,返回第一个可用的加热设备
if heatchill_nodes:
return heatchill_nodes[0]
return None # works without a heat/chill device; heating is simply unavailable
def generate_clean_vessel_protocol(
G: nx.DiGraph,
vessel: dict,  # 🔧 changed: vessel is a dict, not a str
solvent: str,
volume: float,
temp: float,
@@ -43,7 +61,7 @@ def generate_clean_vessel_protocol(
) -> List[Dict[str, Any]]:
"""
生成容器清洗操作的协议序列,复用 pump_protocol 的成熟算法
清洗流程:
1. 查找溶剂容器和废液容器
2. 如果需要加热,启动加热设备
@@ -52,50 +70,63 @@ def generate_clean_vessel_protocol(
b. (可选) 等待清洗作用时间
c. 使用 pump_protocol 将清洗液从目标容器转移到废液容器
4. 如果加热了,停止加热
Args:
G: 有向图,节点为设备和容器,边为流体管道
vessel: 要清洗的容器字典包含id字段
solvent: 用于清洗的溶剂名称
volume: 每次清洗使用的溶剂体积
temp: 清洗时的温度
repeats: 清洗操作的重复次数,默认为 1
Returns:
List[Dict[str, Any]]: 容器清洗操作的动作序列
Raises:
ValueError: 当找不到必要的容器或设备时抛出异常
Examples:
clean_protocol = generate_clean_vessel_protocol(G, {"id": "main_reactor"}, "water", 100.0, 60.0, 2)
"""
# 🔧 core change: extract the vessel ID from the dict
vessel_id, vessel_data = get_vessel(vessel)
action_sequence = []
debug_print(f"开始生成容器清洗协议: vessel={vessel_id}, solvent={solvent}, volume={volume}mL, temp={temp}°C, repeats={repeats}")
print(f"CLEAN_VESSEL: 开始生成容器清洗协议")
print(f" - 目标容器: {vessel} (ID: {vessel_id})")
print(f" - 清洗溶剂: {solvent}")
print(f" - 清洗体积: {volume} mL")
print(f" - 清洗温度: {temp}°C")
print(f" - 重复次数: {repeats}")
# 验证目标容器存在
if vessel_id not in G.nodes():
raise ValueError(f"目标容器 '{vessel_id}' 不存在于系统中")
# 查找溶剂容器
try:
solvent_vessel = find_solvent_vessel(G, solvent)
debug_print(f"找到溶剂容器: {solvent_vessel}")
print(f"CLEAN_VESSEL: 找到溶剂容器: {solvent_vessel}")
except ValueError as e:
raise ValueError(f"无法找到溶剂容器: {str(e)}")
# 查找废液容器
try:
waste_vessel = find_waste_vessel(G)
debug_print(f"找到废液容器: {waste_vessel}")
print(f"CLEAN_VESSEL: 找到废液容器: {waste_vessel}")
except ValueError as e:
raise ValueError(f"无法找到废液容器: {str(e)}")
# 查找加热设备(可选)
heatchill_id = find_connected_heatchill(G, vessel_id)
heatchill_id = find_connected_heatchill(G, vessel_id) # 🔧 使用 vessel_id
if heatchill_id:
debug_print(f"找到加热设备: {heatchill_id}")
print(f"CLEAN_VESSEL: 找到加热设备: {heatchill_id}")
else:
debug_print(f"未找到加热设备,将在室温下清洗")
# 记录清洗前的容器状态
print(f"CLEAN_VESSEL: 未找到加热设备,将在室温下清洗")
# 🔧 新增:记录清洗前的容器状态
print(f"CLEAN_VESSEL: 记录清洗前容器状态...")
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -103,69 +134,79 @@ def generate_clean_vessel_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
print(f"CLEAN_VESSEL: 清洗前液体体积: {original_liquid_volume:.2f}mL")
# 第一步:如果需要加热且有加热设备,启动加热
if temp > 25.0 and heatchill_id:
debug_print(f"启动加热至 {temp}°C")
print(f"CLEAN_VESSEL: 启动加热至 {temp}°C")
heatchill_start_action = {
"device_id": heatchill_id,
"action_name": "heat_chill_start",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel_id}, # 🔧 使用 vessel_id
"temp": temp,
"purpose": f"cleaning with {solvent}"
}
}
action_sequence.append(heatchill_start_action)
# 等待温度稳定
wait_action = {
"action_name": "wait",
"action_kwargs": {"time": 30}  # wait 30 s for the temperature to stabilize
}
action_sequence.append(wait_action)
# 第二步:重复清洗操作
for repeat in range(repeats):
debug_print(f"执行第 {repeat + 1}/{repeats} 次清洗")
print(f"CLEAN_VESSEL: 执行第 {repeat + 1} 次清洗")
# 2a. 使用 pump_protocol 将溶剂转移到目标容器
print(f"CLEAN_VESSEL: 将 {volume} mL {solvent} 转移到 {vessel_id}")
try:
# 调用成熟的 pump_protocol 算法
add_solvent_actions = generate_pump_protocol(
G=G,
from_vessel=solvent_vessel,
to_vessel=vessel_id,
to_vessel=vessel_id, # 🔧 使用 vessel_id
volume=volume,
flowrate=2.5,
flowrate=2.5, # 适中的流速,避免飞溅
transfer_flowrate=2.5
)
action_sequence.extend(add_solvent_actions)
# 更新容器体积(添加清洗溶剂)
# 🔧 新增:更新容器体积(添加清洗溶剂)
print(f"CLEAN_VESSEL: 更新容器体积 - 添加清洗溶剂 {volume:.2f}mL")
if "data" not in vessel:
vessel["data"] = {}
if "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] += volume
print(f"CLEAN_VESSEL: 添加溶剂后体积: {vessel['data']['liquid_volume'][0]:.2f}mL (+{volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = [volume]
print(f"CLEAN_VESSEL: 初始化清洗体积: {volume:.2f}mL")
elif isinstance(current_volume, (int, float)):
vessel["data"]["liquid_volume"] += volume
print(f"CLEAN_VESSEL: 添加溶剂后体积: {vessel['data']['liquid_volume']:.2f}mL (+{volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = volume
print(f"CLEAN_VESSEL: 重置体积为: {volume:.2f}mL")
else:
vessel["data"]["liquid_volume"] = volume
# 同时更新图中的容器数据
print(f"CLEAN_VESSEL: 创建新体积记录: {volume:.2f}mL")
# 🔧 同时更新图中的容器数据
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] += volume
@@ -173,48 +214,58 @@ def generate_clean_vessel_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = current_node_volume + volume
print(f"CLEAN_VESSEL: 图节点体积数据已更新")
except Exception as e:
raise ValueError(f"无法将溶剂转移到容器: {str(e)}")
# 2b. 等待清洗作用时间
cleaning_wait_time = 60 if temp > 50.0 else 30
# 2b. 等待清洗作用时间(让溶剂充分清洗容器)
cleaning_wait_time = 60 if temp > 50.0 else 30 # 高温下等待更久
print(f"CLEAN_VESSEL: 等待清洗作用 {cleaning_wait_time}")
wait_action = {
"action_name": "wait",
"action_kwargs": {"time": cleaning_wait_time}
}
action_sequence.append(wait_action)
# 2c. 使用 pump_protocol 将清洗液转移到废液容器
print(f"CLEAN_VESSEL: 将清洗液从 {vessel_id} 转移到废液容器")
try:
# 调用成熟的 pump_protocol 算法
remove_waste_actions = generate_pump_protocol(
G=G,
from_vessel=vessel_id,
from_vessel=vessel_id, # 🔧 使用 vessel_id
to_vessel=waste_vessel,
volume=volume,
flowrate=2.5,
flowrate=2.5, # 适中的流速
transfer_flowrate=2.5
)
action_sequence.extend(remove_waste_actions)
# 更新容器体积(移除清洗液)
# 🔧 新增:更新容器体积(移除清洗液)
print(f"CLEAN_VESSEL: 更新容器体积 - 移除清洗液 {volume:.2f}mL")
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] = max(0.0, vessel["data"]["liquid_volume"][0] - volume)
print(f"CLEAN_VESSEL: 移除清洗液后体积: {vessel['data']['liquid_volume'][0]:.2f}mL (-{volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = [0.0]
print(f"CLEAN_VESSEL: 重置体积为0mL")
elif isinstance(current_volume, (int, float)):
vessel["data"]["liquid_volume"] = max(0.0, current_volume - volume)
print(f"CLEAN_VESSEL: 移除清洗液后体积: {vessel['data']['liquid_volume']:.2f}mL (-{volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = 0.0
# 同时更新图中的容器数据
print(f"CLEAN_VESSEL: 重置体积为0mL")
# 🔧 同时更新图中的容器数据
if vessel_id in G.nodes():
vessel_node_data = G.nodes[vessel_id].get('data', {})
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = max(0.0, current_node_volume[0] - volume)
@@ -222,30 +273,34 @@ def generate_clean_vessel_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [0.0]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = max(0.0, current_node_volume - volume)
print(f"CLEAN_VESSEL: 图节点体积数据已更新")
except Exception as e:
raise ValueError(f"无法将清洗液转移到废液容器: {str(e)}")
# 2d. 清洗循环间的短暂等待
if repeat < repeats - 1:
if repeat < repeats - 1: # 不是最后一次清洗
print(f"CLEAN_VESSEL: 清洗循环间等待")
wait_action = {
"action_name": "wait",
"action_kwargs": {"time": 10}
}
action_sequence.append(wait_action)
# 第三步:如果加热了,停止加热
if temp > 25.0 and heatchill_id:
print(f"CLEAN_VESSEL: 停止加热")
heatchill_stop_action = {
"device_id": heatchill_id,
"action_name": "heat_chill_stop",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel_id}, # 🔧 使用 vessel_id
}
}
action_sequence.append(heatchill_stop_action)
# 清洗完成后的状态
# 🔧 新增:清洗完成后的状态报告
final_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -253,17 +308,20 @@ def generate_clean_vessel_protocol(
final_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
final_liquid_volume = current_volume
debug_print(f"清洗完成: {len(action_sequence)} 个动作, 体积 {original_liquid_volume:.2f} -> {final_liquid_volume:.2f}mL")
print(f"CLEAN_VESSEL: 清洗完成")
print(f" - 清洗前体积: {original_liquid_volume:.2f}mL")
print(f" - 清洗后体积: {final_liquid_volume:.2f}mL")
print(f" - 生成了 {len(action_sequence)} 个动作")
return action_sequence
# 便捷函数
# 便捷函数:常用清洗方案
def generate_quick_clean_protocol(
G: nx.DiGraph,
vessel: dict,  # 🔧 changed: vessel is a dict, not a str
solvent: str = "water",
volume: float = 100.0
) -> List[Dict[str, Any]]:
"""快速清洗:室温,单次清洗"""
@@ -271,9 +329,9 @@ def generate_quick_clean_protocol(
def generate_thorough_clean_protocol(
G: nx.DiGraph,
vessel: dict,  # 🔧 changed: vessel is a dict, not a str
solvent: str = "water",
volume: float = 150.0,
temp: float = 60.0
) -> List[Dict[str, Any]]:
@@ -282,13 +340,13 @@ def generate_thorough_clean_protocol(
def generate_organic_clean_protocol(
G: nx.DiGraph,
vessel: dict,  # 🔧 changed: vessel is a dict, not a str
volume: float = 100.0
) -> List[Dict[str, Any]]:
"""有机清洗:先用有机溶剂,再用水清洗"""
action_sequence = []
# 第一步:有机溶剂清洗
try:
organic_actions = generate_clean_vessel_protocol(
@@ -296,71 +354,96 @@ def generate_organic_clean_protocol(
)
action_sequence.extend(organic_actions)
except ValueError:
# 如果没有丙酮,尝试乙醇
try:
organic_actions = generate_clean_vessel_protocol(
G, vessel, "ethanol", volume, 25.0, 2
)
action_sequence.extend(organic_actions)
except ValueError:
debug_print("未找到有机溶剂,跳过有机清洗步骤")
print("警告:未找到有机溶剂,跳过有机清洗步骤")
# 第二步:水清洗
water_actions = generate_clean_vessel_protocol(
G, vessel, "water", volume, 25.0, 2
)
action_sequence.extend(water_actions)
return action_sequence
def get_vessel_liquid_volume(G: nx.DiGraph, vessel: str) -> float:
"""获取容器中的液体体积(修复版)"""
if vessel not in G.nodes():
return 0.0
vessel_data = G.nodes[vessel].get('data', {})
liquids = vessel_data.get('liquid', [])
total_volume = 0.0
for liquid in liquids:
if isinstance(liquid, dict):
# 支持两种格式:新格式 (name, volume) 和旧格式 (liquid_type, liquid_volume)
volume = liquid.get('volume') or liquid.get('liquid_volume', 0.0)
total_volume += volume
return total_volume
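`get_vessel_liquid_volume` tolerates both the new (`volume`) and legacy (`liquid_volume`) field names when summing. That summation rule in isolation (a sketch over a plain list, not the graph-walking function itself):

```python
def sum_liquid_volume(liquids: list) -> float:
    """Sum per-liquid volumes, accepting new 'volume' and legacy 'liquid_volume' keys."""
    total = 0.0
    for liquid in liquids:
        if isinstance(liquid, dict):
            # falls back to the legacy key when 'volume' is absent (or zero/falsy)
            total += liquid.get('volume') or liquid.get('liquid_volume', 0.0)
    return total
```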
def get_vessel_liquid_types(G: nx.DiGraph, vessel: str) -> List[str]:
"""获取容器中所有液体的类型"""
if vessel not in G.nodes():
return []
vessel_data = G.nodes[vessel].get('data', {})
liquids = vessel_data.get('liquid', [])
liquid_types = []
for liquid in liquids:
if isinstance(liquid, dict):
# 支持两种格式的液体类型字段
liquid_type = liquid.get('liquid_type') or liquid.get('name', '')
if liquid_type:
liquid_types.append(liquid_type)
return liquid_types
def find_vessel_by_content(G: nx.DiGraph, content: str) -> List[str]:
"""
根据内容物查找所有匹配的容器
返回匹配容器的ID列表
"""
matching_vessels = []
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
# 检查容器名称匹配
node_name = G.nodes[node_id].get('name', '').lower()
if content.lower() in node_id.lower() or content.lower() in node_name:
matching_vessels.append(node_id)
continue
# 检查液体类型匹配
vessel_data = G.nodes[node_id].get('data', {})
liquids = vessel_data.get('liquid', [])
config_data = G.nodes[node_id].get('config', {})
# 检查 reagent_name 和 config.reagent
reagent_name = vessel_data.get('reagent_name', '').lower()
config_reagent = config_data.get('reagent', '').lower()
if (content.lower() == reagent_name or
content.lower() == config_reagent):
matching_vessels.append(node_id)
continue
# 检查液体列表
for liquid in liquids:
if isinstance(liquid, dict):
liquid_type = liquid.get('liquid_type') or liquid.get('name', '')
if liquid_type.lower() == content.lower():
matching_vessels.append(node_id)
break
return matching_vessels


@@ -1,19 +1,402 @@
from functools import partial
import networkx as nx
import re
import logging
from typing import List, Dict, Any, Union
from .utils.logger_util import debug_print, action_log
from .utils.unit_parser import parse_volume_input, parse_mass_input, parse_time_input, parse_temperature_input
from .utils.vessel_parser import get_vessel, find_solvent_vessel, find_connected_heatchill, find_connected_stirrer, find_solid_dispenser
from .utils.vessel_parser import get_vessel
from .utils.logger_util import action_log
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
# 创建进度日志动作
def debug_print(message):
"""调试输出"""
logger.info(f"[DISSOLVE] {message}")
# 🆕 创建进度日志动作
create_action_log = partial(action_log, prefix="[DISSOLVE]")
def parse_volume_input(volume_input: Union[str, float]) -> float:
"""
Parse a volume input; accepts strings with units.
Args:
volume_input: volume input (e.g. "10 mL", "?", 10.0)
Returns:
float: volume in millilitres
"""
if isinstance(volume_input, (int, float)):
debug_print(f"📏 体积输入为数值: {volume_input}")
return float(volume_input)
if not volume_input or not str(volume_input).strip():
debug_print(f"⚠️ 体积输入为空返回0.0mL")
return 0.0
volume_str = str(volume_input).lower().strip()
debug_print(f"🔍 解析体积输入: '{volume_str}'")
# 处理未知体积
if volume_str in ['?', 'unknown', 'tbd', 'to be determined']:
default_volume = 50.0 # 默认50mL
debug_print(f"❓ 检测到未知体积,使用默认值: {default_volume}mL 🎯")
return default_volume
# 移除空格并提取数字和单位
volume_clean = re.sub(r'\s+', '', volume_str)
# 匹配数字和单位的正则表达式
match = re.match(r'([0-9]*\.?[0-9]+)\s*(ml|l|μl|ul|microliter|milliliter|liter)?', volume_clean)
if not match:
debug_print(f"❌ 无法解析体积: '{volume_str}'使用默认值50mL")
return 50.0
value = float(match.group(1))
unit = match.group(2) or 'ml' # 默认单位为毫升
# 转换为毫升
if unit in ['l', 'liter']:
volume = value * 1000.0 # L -> mL
debug_print(f"🔄 体积转换: {value}L → {volume}mL")
elif unit in ['μl', 'ul', 'microliter']:
volume = value / 1000.0 # μL -> mL
debug_print(f"🔄 体积转换: {value}μL → {volume}mL")
else: # ml, milliliter 或默认
volume = value # 已经是mL
debug_print(f"✅ 体积已为mL: {volume}mL")
return volume
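A condensed, self-contained re-sketch of the millilitre normalization above (same defaults for unknown inputs; the `parse_volume_ml` name and trimmed unit list are mine):

```python
import re

def parse_volume_ml(volume_input) -> float:
    """Normalize a volume input (number or string with unit) to millilitres."""
    if isinstance(volume_input, (int, float)):
        return float(volume_input)
    s = re.sub(r'\s+', '', str(volume_input).lower())
    if s in ('?', 'unknown', 'tbd'):
        return 50.0  # same fallback default as the source
    m = re.match(r'([0-9]*\.?[0-9]+)(ml|l|ul|μl)?', s)
    if not m:
        return 50.0
    value = float(m.group(1))
    unit = m.group(2) or 'ml'  # millilitres when no unit is given
    factor = {'l': 1000.0, 'ul': 0.001, 'μl': 0.001}.get(unit, 1.0)
    return value * factor
```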
def parse_mass_input(mass_input: Union[str, float]) -> float:
"""
Parse a mass input; accepts strings with units.
Args:
mass_input: mass input (e.g. "2.9 g", "?", 2.5)
Returns:
float: mass in grams
"""
if isinstance(mass_input, (int, float)):
debug_print(f"⚖️ 质量输入为数值: {mass_input}g")
return float(mass_input)
if not mass_input or not str(mass_input).strip():
debug_print(f"⚠️ 质量输入为空返回0.0g")
return 0.0
mass_str = str(mass_input).lower().strip()
debug_print(f"🔍 解析质量输入: '{mass_str}'")
# 处理未知质量
if mass_str in ['?', 'unknown', 'tbd', 'to be determined']:
default_mass = 1.0 # 默认1g
debug_print(f"❓ 检测到未知质量,使用默认值: {default_mass}g 🎯")
return default_mass
# 移除空格并提取数字和单位
mass_clean = re.sub(r'\s+', '', mass_str)
# 匹配数字和单位的正则表达式
match = re.match(r'([0-9]*\.?[0-9]+)\s*(g|mg|kg|gram|milligram|kilogram)?', mass_clean)
if not match:
debug_print(f"❌ 无法解析质量: '{mass_str}'返回0.0g")
return 0.0
value = float(match.group(1))
unit = match.group(2) or 'g' # 默认单位为克
# 转换为克
if unit in ['mg', 'milligram']:
mass = value / 1000.0 # mg -> g
debug_print(f"🔄 质量转换: {value}mg → {mass}g")
elif unit in ['kg', 'kilogram']:
mass = value * 1000.0 # kg -> g
debug_print(f"🔄 质量转换: {value}kg → {mass}g")
else: # g, gram 或默认
mass = value # 已经是g
debug_print(f"✅ 质量已为g: {mass}g")
return mass
def parse_time_input(time_input: Union[str, float]) -> float:
"""
Parse a time input; accepts strings with units.
Args:
time_input: time input (e.g. "30 min", "1 h", "?", 60.0)
Returns:
float: time in seconds
"""
if isinstance(time_input, (int, float)):
debug_print(f"⏱️ 时间输入为数值: {time_input}")
return float(time_input)
if not time_input or not str(time_input).strip():
debug_print(f"⚠️ 时间输入为空返回0秒")
return 0.0
time_str = str(time_input).lower().strip()
debug_print(f"🔍 解析时间输入: '{time_str}'")
# 处理未知时间
if time_str in ['?', 'unknown', 'tbd']:
default_time = 600.0 # 默认10分钟
debug_print(f"❓ 检测到未知时间,使用默认值: {default_time}s (10分钟) ⏰")
return default_time
# 移除空格并提取数字和单位
time_clean = re.sub(r'\s+', '', time_str)
# 匹配数字和单位的正则表达式
match = re.match(r'([0-9]*\.?[0-9]+)\s*(s|sec|second|min|minute|h|hr|hour|d|day)?', time_clean)
if not match:
debug_print(f"❌ 无法解析时间: '{time_str}'返回0s")
return 0.0
value = float(match.group(1))
unit = match.group(2) or 's' # 默认单位为秒
# 转换为秒
if unit in ['min', 'minute']:
time_sec = value * 60.0 # min -> s
debug_print(f"🔄 时间转换: {value}分钟 → {time_sec}")
elif unit in ['h', 'hr', 'hour']:
time_sec = value * 3600.0 # h -> s
debug_print(f"🔄 时间转换: {value}小时 → {time_sec}")
elif unit in ['d', 'day']:
time_sec = value * 86400.0 # d -> s
debug_print(f"🔄 时间转换: {value}天 → {time_sec}")
else: # s, sec, second 或默认
time_sec = value # 已经是s
debug_print(f"✅ 时间已为秒: {time_sec}")
return time_sec
def parse_temperature_input(temp_input: Union[str, float]) -> float:
"""
Parse a temperature input; accepts strings with units.
Args:
temp_input: temperature input (e.g. "60 °C", "room temperature", "?", 25.0)
Returns:
float: temperature in degrees Celsius
"""
if isinstance(temp_input, (int, float)):
debug_print(f"🌡️ 温度输入为数值: {temp_input}°C")
return float(temp_input)
if not temp_input or not str(temp_input).strip():
debug_print(f"⚠️ 温度输入为空使用默认室温25°C")
return 25.0 # 默认室温
temp_str = str(temp_input).lower().strip()
debug_print(f"🔍 解析温度输入: '{temp_str}'")
# 处理特殊温度描述
temp_aliases = {
'room temperature': 25.0,
'rt': 25.0,
'ambient': 25.0,
'cold': 4.0,
'ice': 0.0,
'reflux': 80.0, # 默认回流温度
'?': 25.0,
'unknown': 25.0
}
if temp_str in temp_aliases:
result = temp_aliases[temp_str]
debug_print(f"🏷️ 温度别名解析: '{temp_str}'{result}°C")
return result
# 移除空格并提取数字和单位
temp_clean = re.sub(r'\s+', '', temp_str)
# 匹配数字和单位的正则表达式
match = re.match(r'([0-9]*\.?[0-9]+)\s*(°c|c|celsius|°f|f|fahrenheit|k|kelvin)?', temp_clean)
if not match:
debug_print(f"❌ 无法解析温度: '{temp_str}'使用默认值25°C")
return 25.0
value = float(match.group(1))
unit = match.group(2) or 'c' # 默认单位为摄氏度
# 转换为摄氏度
if unit in ['°f', 'f', 'fahrenheit']:
temp_c = (value - 32) * 5/9 # F -> C
debug_print(f"🔄 温度转换: {value}°F → {temp_c:.1f}°C")
elif unit in ['k', 'kelvin']:
temp_c = value - 273.15 # K -> C
debug_print(f"🔄 温度转换: {value}K → {temp_c:.1f}°C")
else: # °c, c, celsius 或默认
temp_c = value # 已经是C
debug_print(f"✅ 温度已为°C: {temp_c}°C")
return temp_c
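The temperature parser combines an alias table with unit conversion. A condensed sketch (alias table and unit list trimmed; `parse_temp_c` is my name for it):

```python
import re

TEMP_ALIASES = {'room temperature': 25.0, 'rt': 25.0, 'ambient': 25.0,
                'cold': 4.0, 'ice': 0.0, 'reflux': 80.0}

def parse_temp_c(temp_input) -> float:
    """Normalize a temperature input (number, alias, or string with unit) to Celsius."""
    if isinstance(temp_input, (int, float)):
        return float(temp_input)
    s = str(temp_input).lower().strip()
    if s in TEMP_ALIASES:
        return TEMP_ALIASES[s]
    m = re.match(r'([0-9]*\.?[0-9]+)(°c|c|°f|f|k)?', re.sub(r'\s+', '', s))
    if not m:
        return 25.0  # same room-temperature fallback as the source
    value, unit = float(m.group(1)), m.group(2) or 'c'
    if unit in ('°f', 'f'):
        return (value - 32) * 5 / 9
    if unit == 'k':
        return value - 273.15
    return value
```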
def find_solvent_vessel(G: nx.DiGraph, solvent: str) -> str:
"""增强版溶剂容器查找,支持多种匹配模式"""
debug_print(f"🔍 开始查找溶剂 '{solvent}' 的容器...")
# 🔧 方法1直接搜索 data.reagent_name 和 config.reagent
debug_print(f"📋 方法1: 搜索reagent字段...")
for node in G.nodes():
node_data = G.nodes[node].get('data', {})
node_type = G.nodes[node].get('type', '')
config_data = G.nodes[node].get('config', {})
# 只搜索容器类型的节点
if node_type == 'container':
reagent_name = node_data.get('reagent_name', '').lower()
config_reagent = config_data.get('reagent', '').lower()
# 精确匹配
if reagent_name == solvent.lower() or config_reagent == solvent.lower():
debug_print(f"✅ 通过reagent字段精确匹配到容器: {node} 🎯")
return node
# 模糊匹配
if (solvent.lower() in reagent_name and reagent_name) or \
(solvent.lower() in config_reagent and config_reagent):
debug_print(f"✅ 通过reagent字段模糊匹配到容器: {node} 🔍")
return node
# 🔧 方法2常见的容器命名规则
debug_print(f"📋 方法2: 使用命名规则查找...")
solvent_clean = solvent.lower().replace(' ', '_').replace('-', '_')
possible_names = [
solvent_clean,
f"flask_{solvent_clean}",
f"bottle_{solvent_clean}",
f"vessel_{solvent_clean}",
f"{solvent_clean}_flask",
f"{solvent_clean}_bottle",
f"solvent_{solvent_clean}",
f"reagent_{solvent_clean}",
f"reagent_bottle_{solvent_clean}",
f"reagent_bottle_1", # 通用试剂瓶
f"reagent_bottle_2",
f"reagent_bottle_3"
]
debug_print(f"🔍 尝试的容器名称: {possible_names[:5]}... (共{len(possible_names)}个)")
for name in possible_names:
if name in G.nodes():
node_type = G.nodes[name].get('type', '')
if node_type == 'container':
debug_print(f"✅ 通过命名规则找到容器: {name} 📝")
return name
# 🔧 方法3节点名称模糊匹配
debug_print(f"📋 方法3: 节点名称模糊匹配...")
for node_id in G.nodes():
node_data = G.nodes[node_id]
if node_data.get('type') == 'container':
# 检查节点名称是否包含溶剂名称
if solvent_clean in node_id.lower():
debug_print(f"✅ 通过节点名称模糊匹配到容器: {node_id} 🔍")
return node_id
# 检查液体类型匹配
vessel_data = node_data.get('data', {})
liquids = vessel_data.get('liquid', [])
for liquid in liquids:
if isinstance(liquid, dict):
liquid_type = liquid.get('liquid_type') or liquid.get('name', '')
if liquid_type.lower() == solvent.lower():
debug_print(f"✅ 通过液体类型匹配到容器: {node_id} 💧")
return node_id
# 🔧 方法4使用第一个试剂瓶作为备选
debug_print(f"📋 方法4: 查找备选试剂瓶...")
for node_id in G.nodes():
node_data = G.nodes[node_id]
if (node_data.get('type') == 'container' and
('reagent' in node_id.lower() or 'bottle' in node_id.lower() or 'flask' in node_id.lower())):
debug_print(f"⚠️ 未找到专用容器,使用备选试剂瓶: {node_id} 🔄")
return node_id
debug_print(f"❌ 所有方法都失败了,无法找到容器!")
raise ValueError(f"找不到溶剂 '{solvent}' 对应的容器")
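The four-stage solvent lookup above can be reduced to its first two passes for illustration. A sketch over a plain dict of node attributes (the real code walks a networkx graph the same way; `find_solvent_vessel_sketch` and the sample nodes are mine):

```python
def find_solvent_vessel_sketch(nodes: dict, solvent: str) -> str:
    """Minimal sketch of the lookup order: reagent fields first, then node-name match."""
    target = solvent.lower()
    # Pass 1: exact match on data.reagent_name / config.reagent
    for node_id, attrs in nodes.items():
        if attrs.get('type') != 'container':
            continue
        reagent = (attrs.get('data', {}).get('reagent_name', '') or
                   attrs.get('config', {}).get('reagent', '')).lower()
        if reagent == target:
            return node_id
    # Pass 2: fall back to node-name containment
    clean = target.replace(' ', '_')
    for node_id, attrs in nodes.items():
        if attrs.get('type') == 'container' and clean in node_id.lower():
            return node_id
    raise ValueError(f"no vessel found for solvent '{solvent}'")

nodes = {
    "flask_water": {"type": "container"},
    "bottle_1": {"type": "container", "data": {"reagent_name": "ethanol"}},
}
```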
def find_connected_heatchill(G: nx.DiGraph, vessel: str) -> str:
"""查找连接到指定容器的加热搅拌器"""
debug_print(f"🔍 查找连接到容器 '{vessel}' 的加热搅拌器...")
heatchill_nodes = []
for node in G.nodes():
node_class = G.nodes[node].get('class', '').lower()
if 'heatchill' in node_class:
heatchill_nodes.append(node)
debug_print(f"📋 发现加热搅拌器: {node}")
debug_print(f"📊 共找到 {len(heatchill_nodes)} 个加热搅拌器")
# 查找连接到容器的加热器
for heatchill in heatchill_nodes:
if G.has_edge(heatchill, vessel) or G.has_edge(vessel, heatchill):
debug_print(f"✅ 找到连接的加热搅拌器: {heatchill} 🔗")
return heatchill
# 返回第一个加热器
if heatchill_nodes:
debug_print(f"⚠️ 未找到直接连接的加热搅拌器,使用第一个: {heatchill_nodes[0]} 🔄")
return heatchill_nodes[0]
debug_print(f"❌ 未找到任何加热搅拌器")
return ""
def find_connected_stirrer(G: nx.DiGraph, vessel: str) -> str:
"""Find the stirrer connected to the given vessel."""
debug_print(f"🔍 Looking for a stirrer connected to vessel '{vessel}'...")
stirrer_nodes = []
for node in G.nodes():
node_class = G.nodes[node].get('class', '').lower()
if 'stirrer' in node_class:
stirrer_nodes.append(node)
debug_print(f"📋 Found stirrer: {node}")
debug_print(f"📊 {len(stirrer_nodes)} stirrer(s) found in total")
# Prefer a stirrer directly connected to the vessel
for stirrer in stirrer_nodes:
if G.has_edge(stirrer, vessel) or G.has_edge(vessel, stirrer):
debug_print(f"✅ Found connected stirrer: {stirrer} 🔗")
return stirrer
# Otherwise fall back to the first stirrer
if stirrer_nodes:
debug_print(f"⚠️ No directly connected stirrer, using the first one: {stirrer_nodes[0]} 🔄")
return stirrer_nodes[0]
debug_print(f"❌ No stirrer found")
return ""
def find_solid_dispenser(G: nx.DiGraph) -> str:
"""Find a solid dispenser."""
debug_print(f"🔍 Looking for a solid dispenser...")
for node in G.nodes():
node_class = G.nodes[node].get('class', '').lower()
if 'solid_dispenser' in node_class or 'dispenser' in node_class:
debug_print(f"✅ Found solid dispenser: {node} 🥄")
return node
debug_print(f"❌ No solid dispenser found")
return ""
def generate_dissolve_protocol(
G: nx.DiGraph,
vessel: dict, # 🔧 changed: takes a dict instead of a string
@@ -53,21 +436,43 @@ def generate_dissolve_protocol(
- mol: "0.12 mol", "16.2 mmol"
"""
# Extract the vessel ID from the dict
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, vessel_data = get_vessel(vessel)
debug_print(f"Dissolve protocol: vessel={vessel_id}, solvent='{solvent}', volume={volume}, "
f"mass={mass}, temp={temp}, time={time}")
debug_print("=" * 60)
debug_print("🧪 Generating dissolve protocol")
debug_print(f"📋 Raw parameters:")
debug_print(f" 🥼 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 💧 solvent: '{solvent}'")
debug_print(f" 📏 volume: {volume} (type: {type(volume)})")
debug_print(f" ⚖️ mass: {mass} (type: {type(mass)})")
debug_print(f" 🌡️ temp: {temp} (type: {type(temp)})")
debug_print(f" ⏱️ time: {time} (type: {type(time)})")
debug_print(f" 🧪 reagent: '{reagent}'")
debug_print(f" 🧬 mol: '{mol}'")
debug_print(f" 🎯 event: '{event}'")
debug_print(f" 📦 kwargs: {kwargs}") # show extra parameters
debug_print("=" * 60)
action_sequence = []
# === Parameter validation ===
debug_print("🔍 Step 1: validating parameters...")
action_sequence.append(create_action_log(f"Starting dissolve operation - vessel: {vessel_id}", "🎬"))
if not vessel_id:
debug_print("❌ vessel must not be empty")
raise ValueError("vessel must not be empty")
if vessel_id not in G.nodes():
debug_print(f"❌ Vessel '{vessel_id}' does not exist in the system")
raise ValueError(f"Vessel '{vessel_id}' does not exist in the system")
# Record the vessel state before dissolving
debug_print("✅ Basic parameter validation passed")
action_sequence.append(create_action_log("Parameter validation passed", ""))
# 🔧 New: record the vessel state before dissolving
debug_print("🔍 Recording pre-dissolve vessel state...")
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -75,16 +480,30 @@ def generate_dissolve_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
# === Parameter parsing ===
debug_print(f"📊 Liquid volume before dissolving: {original_liquid_volume:.2f}mL")
# === 🔧 Key fix: parameter parsing ===
debug_print("🔍 Step 2: parsing parameters...")
action_sequence.append(create_action_log("Parsing dissolve parameters...", "🔍"))
# Parse the various inputs into numeric values
final_volume = parse_volume_input(volume)
final_mass = parse_mass_input(mass)
final_temp = parse_temperature_input(temp)
final_time = parse_time_input(time)
debug_print(f"Parsed parameters: vol={final_volume}mL, mass={final_mass}g, temp={final_temp}°C, time={final_time}s")
debug_print(f"📊 Parse results:")
debug_print(f" 📏 volume: {final_volume}mL")
debug_print(f" ⚖️ mass: {final_mass}g")
debug_print(f" 🌡️ temperature: {final_temp}°C")
debug_print(f" ⏱️ time: {final_time}s")
debug_print(f" 🧪 reagent: '{reagent}'")
debug_print(f" 🧬 mol: '{mol}'")
debug_print(f" 🎯 event: '{event}'")
# === Determine dissolve type ===
debug_print("🔍 Step 3: determining dissolve type...")
action_sequence.append(create_action_log("Determining dissolve type...", "🔍"))
# Decide between solid and liquid dissolution
is_solid_dissolve = (final_mass > 0 or (mol and mol.strip() != "") or (reagent and reagent.strip() != ""))
@@ -96,31 +515,49 @@ def generate_dissolve_protocol(
final_volume = 50.0
if not solvent:
solvent = "water" # default solvent
debug_print("No explicit dissolve parameters given, defaulting to 50mL of water")
debug_print("⚠️ No explicit dissolve parameters given, defaulting to 50mL of water")
dissolve_type = "solid dissolution" if is_solid_dissolve else "liquid dissolution"
debug_print(f"Dissolve type: {dissolve_type}")
action_sequence.append(create_action_log(f"Dissolve type: {dissolve_type}", "📋"))
dissolve_emoji = "🧂" if is_solid_dissolve else "💧"
debug_print(f"📋 Dissolve type: {dissolve_type} {dissolve_emoji}")
action_sequence.append(create_action_log(f"Determined dissolve type: {dissolve_type} {dissolve_emoji}", "📋"))
# === Device lookup ===
debug_print("🔍 Step 4: looking up devices...")
action_sequence.append(create_action_log("Looking up related devices...", "🔍"))
# Find the heater-stirrer
heatchill_id = find_connected_heatchill(G, vessel_id)
stirrer_id = find_connected_stirrer(G, vessel_id)
# Prefer the heater-stirrer, otherwise use a standalone stirrer
stir_device_id = heatchill_id or stirrer_id
debug_print(f"Devices: heatchill='{heatchill_id}', stirrer='{stirrer_id}'")
if not stir_device_id:
debug_print(f"📊 Device mapping:")
debug_print(f" 🔥 heater: '{heatchill_id}'")
debug_print(f" 🌪️ stirrer: '{stirrer_id}'")
debug_print(f" 🎯 device in use: '{stir_device_id}'")
if heatchill_id:
action_sequence.append(create_action_log(f"Found heater-stirrer: {heatchill_id}", "🔥"))
elif stirrer_id:
action_sequence.append(create_action_log(f"Found stirrer: {stirrer_id}", "🌪️"))
else:
action_sequence.append(create_action_log("No stirring device found, stirring will be skipped", "⚠️"))
# === Execute the dissolve sequence ===
debug_print("🔍 Step 5: executing dissolve sequence...")
try:
# Start heating/stirring (if needed)
# Step 5.1: start heating/stirring (if needed)
if stir_device_id and (final_temp > 25.0 or final_time > 0 or stir_speed > 0):
debug_print(f"🔍 5.1: starting heated stirring, temperature: {final_temp}°C")
action_sequence.append(create_action_log(f"Preparing heated stirring (target temperature: {final_temp}°C)", "🔥"))
if heatchill_id and (final_temp > 25.0 or final_time > 0):
# Use the heater-stirrer
action_sequence.append(create_action_log(f"Starting heater-stirrer {heatchill_id}", "🔥"))
heatchill_action = {
"device_id": heatchill_id,
@@ -136,6 +573,7 @@ def generate_dissolve_protocol(
# Wait for the temperature to stabilize
if final_temp > 25.0:
wait_time = min(60, abs(final_temp - 25.0) * 1.5)
action_sequence.append(create_action_log(f"Waiting for temperature to stabilize ({wait_time:.0f}s)", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": wait_time}
@@ -143,6 +581,7 @@ def generate_dissolve_protocol(
elif stirrer_id:
# Use a standalone stirrer
action_sequence.append(create_action_log(f"Starting stirrer {stirrer_id} (speed: {stir_speed}rpm)", "🌪️"))
stir_action = {
"device_id": stirrer_id,
@@ -154,8 +593,9 @@ def generate_dissolve_protocol(
}
}
action_sequence.append(stir_action)
# Wait for stirring to stabilize
action_sequence.append(create_action_log("Waiting for stirring to stabilize...", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 5}
@@ -163,8 +603,12 @@ def generate_dissolve_protocol(
if is_solid_dissolve:
# === Solid dissolution path ===
debug_print(f"🔍 5.2: taking the solid dissolution path")
action_sequence.append(create_action_log("Starting solid dissolution sequence", "🧂"))
solid_dispenser = find_solid_dispenser(G)
if solid_dispenser:
action_sequence.append(create_action_log(f"Found solid dispenser: {solid_dispenser}", "🥄"))
# Dispense the solid
add_kwargs = {
@@ -176,27 +620,42 @@ def generate_dissolve_protocol(
if final_mass > 0:
add_kwargs["mass"] = str(final_mass)
action_sequence.append(create_action_log(f"Preparing to add solid: {final_mass}g", "⚖️"))
if mol and mol.strip():
add_kwargs["mol"] = mol
action_sequence.append(create_action_log(f"Adding by moles: {mol}", "🧬"))
action_sequence.append(create_action_log("Starting solid dispensing", "🥄"))
action_sequence.append({
"device_id": solid_dispenser,
"action_name": "add_solid",
"action_kwargs": add_kwargs
})
# Solid-dissolution volume accounting - the solid itself does not add significant volume
debug_print(f"✅ Solid dispensing complete")
action_sequence.append(create_action_log("Solid dispensing complete", ""))
# 🔧 New: solid-dissolution volume accounting - the solid adds little volume, though small changes are possible
debug_print(f"🔧 Solid dissolution - volume change is small, the change is mainly in mass")
# Solids usually do not change the liquid volume significantly; just log it
action_sequence.append(create_action_log(f"Solid added: {final_mass}g", "📊"))
else:
debug_print("No solid dispenser found, skipping solid addition")
debug_print("⚠️ No solid dispenser found, skipping solid addition")
action_sequence.append(create_action_log("No solid dispenser found, cannot add solid", ""))
elif is_liquid_dissolve:
# === Liquid dissolution path ===
debug_print(f"🔍 5.3: taking the liquid dissolution path")
action_sequence.append(create_action_log("Starting liquid dissolution sequence", "💧"))
# Find the solvent vessel
action_sequence.append(create_action_log("Looking for the solvent vessel...", "🔍"))
try:
solvent_vessel = find_solvent_vessel(G, solvent)
action_sequence.append(create_action_log(f"Found solvent vessel: {solvent_vessel}", "🧪"))
except ValueError as e:
debug_print(f"Solvent vessel lookup failed: {str(e)}, skipping solvent addition")
debug_print(f"⚠️ {str(e)}, skipping solvent addition")
action_sequence.append(create_action_log(f"Solvent vessel lookup failed: {str(e)}", ""))
solvent_vessel = None
@@ -204,7 +663,10 @@ def generate_dissolve_protocol(
# Flow rates - dissolving usually uses slower rates to avoid splashing
flowrate = 1.0 # slow injection rate
transfer_flowrate = 0.5 # slow transfer rate
action_sequence.append(create_action_log(f"Setting flow rate: {flowrate}mL/min (slow injection)", ""))
action_sequence.append(create_action_log(f"Starting transfer of {final_volume}mL {solvent}", "🚰"))
# Call the pump protocol
pump_actions = generate_pump_protocol_with_rinsing(
G=G,
@@ -226,9 +688,12 @@ def generate_dissolve_protocol(
**kwargs
)
action_sequence.extend(pump_actions)
# Liquid-dissolution volume accounting - update the vessel volume after adding solvent
debug_print(f"✅ Solvent transfer complete, added {len(pump_actions)} actions")
action_sequence.append(create_action_log(f"Solvent transfer complete ({len(pump_actions)} operations)", ""))
# 🔧 New: liquid-dissolution volume accounting - update the vessel volume after adding solvent
debug_print(f"🔧 Updating vessel liquid volume - adding solvent {final_volume:.2f}mL")
# Make sure the vessel has a data field
if "data" not in vessel:
vessel["data"] = {}
@@ -238,14 +703,19 @@ def generate_dissolve_protocol(
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] += final_volume
debug_print(f"📊 Volume after adding solvent: {vessel['data']['liquid_volume'][0]:.2f}mL (+{final_volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = [final_volume]
debug_print(f"📊 Initialized dissolve volume: {final_volume:.2f}mL")
elif isinstance(current_volume, (int, float)):
vessel["data"]["liquid_volume"] += final_volume
debug_print(f"📊 Volume after adding solvent: {vessel['data']['liquid_volume']:.2f}mL (+{final_volume:.2f}mL)")
else:
vessel["data"]["liquid_volume"] = final_volume
debug_print(f"📊 Volume reset to: {final_volume:.2f}mL")
else:
vessel["data"]["liquid_volume"] = final_volume
debug_print(f"📊 Created new volume record: {final_volume:.2f}mL")
# 🔧 Also update the vessel data in the graph
if vessel_id in G.nodes():
@@ -262,19 +732,27 @@ def generate_dissolve_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [final_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = current_node_volume + final_volume
debug_print(f"✅ 图节点体积数据已更新")
action_sequence.append(create_action_log(f"容器体积已更新 (+{final_volume:.2f}mL)", "📊"))
# 溶剂添加后等待
action_sequence.append(create_action_log("溶剂添加后短暂等待...", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 5}
})
# 等待溶解完成
# 步骤5.4: 等待溶解完成
if final_time > 0:
debug_print(f"🔍 5.4: 等待溶解完成 - {final_time}s")
wait_minutes = final_time / 60
action_sequence.append(create_action_log(f"开始溶解等待 ({wait_minutes:.1f}分钟)", ""))
if heatchill_id:
# 使用定时加热搅拌
action_sequence.append(create_action_log(f"使用加热搅拌器进行定时溶解", "🔥"))
dissolve_action = {
"device_id": heatchill_id,
@@ -292,6 +770,7 @@ def generate_dissolve_protocol(
elif stirrer_id:
# Timed stirring
action_sequence.append(create_action_log(f"Using stirrer for timed dissolution", "🌪️"))
stir_action = {
"device_id": stirrer_id,
@@ -308,6 +787,7 @@ def generate_dissolve_protocol(
else:
# Simple wait
action_sequence.append(create_action_log(f"Simple wait for dissolution to finish", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": final_time}
@@ -315,7 +795,9 @@ def generate_dissolve_protocol(
# Step 5.5: stop heating/stirring (if needed)
if heatchill_id and final_time == 0 and final_temp > 25.0:
debug_print(f"🔍 5.5: stopping heater")
action_sequence.append(create_action_log("Stopping heater-stirrer", "🛑"))
stop_action = {
"device_id": heatchill_id,
"action_name": "heat_chill_stop",
@@ -326,7 +808,7 @@ def generate_dissolve_protocol(
action_sequence.append(stop_action)
except Exception as e:
debug_print(f"Dissolve sequence failed: {str(e)}")
debug_print(f"Dissolve sequence failed: {str(e)}")
action_sequence.append(create_action_log(f"Dissolve sequence failed: {str(e)}", ""))
# Append an error log entry
action_sequence.append({
@@ -347,8 +829,23 @@ def generate_dissolve_protocol(
final_liquid_volume = current_volume
# === Final results ===
debug_print(f"Dissolve protocol complete: {vessel_id}, type={dissolve_type}, "
f"actions={len(action_sequence)}, volume={original_liquid_volume:.2f}→{final_liquid_volume:.2f}mL")
debug_print("=" * 60)
debug_print(f"🎉 Dissolve protocol generated")
debug_print(f"📊 Protocol statistics:")
debug_print(f" 📋 total actions: {len(action_sequence)}")
debug_print(f" 🥼 vessel: {vessel_id}")
debug_print(f" {dissolve_emoji} dissolve type: {dissolve_type}")
if is_liquid_dissolve:
debug_print(f" 💧 solvent: {solvent} ({final_volume}mL)")
if is_solid_dissolve:
debug_print(f" 🧪 reagent: {reagent}")
debug_print(f" ⚖️ mass: {final_mass}g")
debug_print(f" 🧬 mol: {mol}")
debug_print(f" 🌡️ temperature: {final_temp}°C")
debug_print(f" ⏱️ time: {final_time}s")
debug_print(f" 📊 volume before dissolving: {original_liquid_volume:.2f}mL")
debug_print(f" 📊 volume after dissolving: {final_liquid_volume:.2f}mL")
debug_print("=" * 60)
# Append the completion log
summary_msg = f"Dissolve protocol complete: {vessel_id}"
@@ -357,7 +854,7 @@ def generate_dissolve_protocol(
if is_solid_dissolve:
summary_msg += f" (dissolved {final_mass}g {reagent})"
action_sequence.append(create_action_log(summary_msg, ""))
action_sequence.append(create_action_log(summary_msg, "🎉"))
return action_sequence
@@ -369,7 +866,7 @@ def dissolve_solid_by_mass(G: nx.DiGraph, vessel: dict, reagent: str, mass: Unio
temp: Union[str, float] = 25.0, time: Union[str, float] = "10 min") -> List[Dict[str, Any]]:
"""按质量溶解固体"""
vessel_id = vessel["id"]
debug_print(f"快速固体溶解: {reagent} ({mass}) → {vessel_id}")
debug_print(f"🧂 快速固体溶解: {reagent} ({mass}) → {vessel_id}")
return generate_dissolve_protocol(
G, vessel,
mass=mass,
@@ -382,7 +879,7 @@ def dissolve_solid_by_moles(G: nx.DiGraph, vessel: dict, reagent: str, mol: str,
temp: Union[str, float] = 25.0, time: Union[str, float] = "10 min") -> List[Dict[str, Any]]:
"""按摩尔数溶解固体"""
vessel_id = vessel["id"]
debug_print(f"按摩尔数溶解固体: {reagent} ({mol}) → {vessel_id}")
debug_print(f"🧬 按摩尔数溶解固体: {reagent} ({mol}) → {vessel_id}")
return generate_dissolve_protocol(
G, vessel,
mol=mol,
@@ -395,7 +892,7 @@ def dissolve_with_solvent(G: nx.DiGraph, vessel: dict, solvent: str, volume: Uni
temp: Union[str, float] = 25.0, time: Union[str, float] = "5 min") -> List[Dict[str, Any]]:
"""用溶剂溶解"""
vessel_id = vessel["id"]
debug_print(f"溶剂溶解: {solvent} ({volume}) → {vessel_id}")
debug_print(f"💧 溶剂溶解: {solvent} ({volume}) → {vessel_id}")
return generate_dissolve_protocol(
G, vessel,
solvent=solvent,
@@ -407,7 +904,7 @@ def dissolve_with_solvent(G: nx.DiGraph, vessel: dict, solvent: str, volume: Uni
def dissolve_at_room_temp(G: nx.DiGraph, vessel: dict, solvent: str, volume: Union[str, float]) -> List[Dict[str, Any]]:
"""室温溶解"""
vessel_id = vessel["id"]
debug_print(f"室温溶解: {solvent} ({volume}) → {vessel_id}")
debug_print(f"🌡️ 室温溶解: {solvent} ({volume}) → {vessel_id}")
return generate_dissolve_protocol(
G, vessel,
solvent=solvent,
@@ -420,7 +917,7 @@ def dissolve_with_heating(G: nx.DiGraph, vessel: dict, solvent: str, volume: Uni
temp: Union[str, float] = "60 °C", time: Union[str, float] = "15 min") -> List[Dict[str, Any]]:
"""加热溶解"""
vessel_id = vessel["id"]
debug_print(f"加热溶解: {solvent} ({volume}) → {vessel_id} @ {temp}")
debug_print(f"🔥 加热溶解: {solvent} ({volume}) → {vessel_id} @ {temp}")
return generate_dissolve_protocol(
G, vessel,
solvent=solvent,
@@ -432,31 +929,37 @@ def dissolve_with_heating(G: nx.DiGraph, vessel: dict, solvent: str, volume: Uni
# Test function
def test_dissolve_protocol():
"""Exercise the dissolve protocol's parameter parsing."""
debug_print("=== DISSOLVE PROTOCOL enhanced test ===")
# Volume parsing
debug_print("💧 Testing volume parsing...")
volumes = ["10 mL", "?", 10.0, "1 L", "500 μL"]
for vol in volumes:
result = parse_volume_input(vol)
debug_print(f"Volume parse: {vol} → {result}mL")
debug_print(f"📏 Volume parse: {vol} → {result}mL")
# Mass parsing
debug_print("⚖️ Testing mass parsing...")
masses = ["2.9 g", "?", 2.5, "500 mg"]
for mass in masses:
result = parse_mass_input(mass)
debug_print(f"Mass parse: {mass} → {result}g")
debug_print(f"⚖️ Mass parse: {mass} → {result}g")
# Temperature parsing
debug_print("🌡️ Testing temperature parsing...")
temps = ["60 °C", "room temperature", "?", 25.0, "reflux"]
for temp in temps:
result = parse_temperature_input(temp)
debug_print(f"Temperature parse: {temp} → {result}°C")
debug_print(f"🌡️ Temperature parse: {temp} → {result}°C")
# Time parsing
debug_print("⏱️ Testing time parsing...")
times = ["30 min", "1 h", "?", 60.0]
for time in times:
result = parse_time_input(time)
debug_print(f"Time parse: {time} → {result}s")
debug_print("Test complete")
debug_print(f"⏱️ Time parse: {time} → {result}s")
debug_print("Test complete")
if __name__ == "__main__":
test_dissolve_protocol()
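The protocol body above updates `liquid_volume` in four separate branches because the field may be stored as a list, a scalar, or be missing entirely. A small helper capturing that normalization in one place (a sketch only; `add_to_liquid_volume` is hypothetical and not part of this codebase):

```python
def add_to_liquid_volume(data: dict, delta: float) -> float:
    """Add delta mL to data['liquid_volume'], tolerating list, scalar,
    or missing representations; return the resulting volume."""
    current = data.get("liquid_volume", 0.0)
    if isinstance(current, list):
        if current:
            current[0] += delta
            return current[0]
        data["liquid_volume"] = [delta]
        return delta
    if isinstance(current, (int, float)):
        data["liquid_volume"] = current + delta
        return data["liquid_volume"]
    # Any other type: reset to the added volume, as the protocol does.
    data["liquid_volume"] = delta
    return delta
```

Both the `vessel` dict and the matching `G.nodes[vessel_id]['data']` dict could be passed through the same helper, keeping the two copies in sync.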

View File

@@ -1,40 +1,87 @@
import networkx as nx
from typing import List, Dict, Any
from .utils.vessel_parser import get_vessel, find_connected_heatchill
from .utils.logger_util import debug_print
from unilabos.compile.utils.vessel_parser import get_vessel
def find_connected_heater(G: nx.DiGraph, vessel: str) -> str:
"""
Find the heater connected to a vessel
Args:
G: device graph
vessel: vessel name
Returns:
str: heater ID, or None if no heater is found
"""
print(f"DRY: looking for a heater connected to vessel '{vessel}'...")
# Collect all heater nodes
heater_nodes = [node for node in G.nodes()
if ('heater' in node.lower() or
'heat' in node.lower() or
G.nodes[node].get('class') == 'virtual_heatchill' or
G.nodes[node].get('type') == 'heater')]
print(f"DRY: heater nodes found: {heater_nodes}")
# Check whether any heater is directly connected to the target vessel
for heater in heater_nodes:
if G.has_edge(heater, vessel) or G.has_edge(vessel, heater):
print(f"DRY: found heater connected to vessel '{vessel}': {heater}")
return heater
# Without a direct connection, look for the nearest heater
for heater in heater_nodes:
try:
path = nx.shortest_path(G, source=heater, target=vessel)
if len(path) <= 3: # at most one intermediate node
print(f"DRY: found a nearby heater: {heater}, path: {' -> '.join(path)}")
return heater
except nx.NetworkXNoPath:
continue
print(f"DRY: no heater connected to vessel '{vessel}' found")
return None
def generate_dry_protocol(
G: nx.DiGraph,
vessel: dict,
compound: str = "",
**kwargs
vessel: dict, # 🔧 changed: takes a dict instead of a string
compound: str = "", # 🔧 changed: parameter order adjusted, default value added
**kwargs # accepts but ignores any extra parameters
) -> List[Dict[str, Any]]:
"""
生成干燥协议序列
Args:
G: 有向图,节点为容器和设备
vessel: 目标容器字典从XDL传入
compound: 化合物名称从XDL传入可选
**kwargs: 其他可选参数,但不使用
Returns:
List[Dict[str, Any]]: 动作序列
"""
# 🔧 核心修改从字典中提取容器ID
vessel_id, vessel_data = get_vessel(vessel)
action_sequence = []
# 默认参数
dry_temp = 60.0
dry_time = 3600.0
simulation_time = 60.0
debug_print(f"开始生成干燥协议: vessel={vessel_id}, compound={compound or '未指定'}, temp={dry_temp}°C")
# 记录干燥前的容器状态
dry_temp = 60.0 # 默认干燥温度 60°C
dry_time = 3600.0 # 默认干燥时间 1小时3600秒
simulation_time = 60.0 # 模拟时间 1分钟
print(f"🌡️ DRY: 开始生成干燥协议 ✨")
print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
print(f" 🧪 化合物: {compound or '未指定'}")
print(f" 🔥 干燥温度: {dry_temp}°C")
print(f" ⏰ 干燥时间: {dry_time/60:.0f} 分钟")
# 🔧 新增:记录干燥前的容器状态
print(f"🔍 记录干燥前容器状态...")
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -42,30 +89,39 @@ def generate_dry_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
print(f"📊 干燥前液体体积: {original_liquid_volume:.2f}mL")
# 1. 验证目标容器存在
print(f"\n📋 步骤1: 验证目标容器 '{vessel_id}' 是否存在...")
if vessel_id not in G.nodes():
debug_print(f"容器 '{vessel_id}' 不存在于系统中,跳过干燥")
print(f"⚠️ DRY: 警告 - 容器 '{vessel_id}' 不存在于系统中,跳过干燥 😢")
return action_sequence
print(f"✅ 容器 '{vessel_id}' 验证通过!")
# 2. 查找相连的加热器
heater_id = find_connected_heatchill(G, vessel_id)
print(f"\n🔍 步骤2: 查找与容器相连的加热器...")
heater_id = find_connected_heater(G, vessel_id) # 🔧 使用 vessel_id
if heater_id is None:
debug_print(f"未找到与容器 '{vessel_id}' 相连的加热器,添加模拟干燥动作")
print(f"😭 DRY: 警告 - 未找到与容器 '{vessel_id}' 相连的加热器,跳过干燥")
print(f"🎭 添加模拟干燥动作...")
# 添加一个等待动作,表示干燥过程(模拟)
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
"time": 10.0,
"time": 10.0, # 模拟等待时间
"description": f"模拟干燥 {compound or '化合物'} (无加热器可用)"
}
})
# 模拟干燥的体积变化
# 🔧 新增:模拟干燥的体积变化(溶剂蒸发)
print(f"🔧 模拟干燥过程的体积减少...")
if original_liquid_volume > 0:
# 假设干燥过程中损失10%的体积(溶剂蒸发)
volume_loss = original_liquid_volume * 0.1
new_volume = max(0.0, original_liquid_volume - volume_loss)
# 更新vessel字典中的体积
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
@@ -77,14 +133,15 @@ def generate_dry_protocol(
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
# 🔧 Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
@@ -92,27 +149,33 @@ def generate_dry_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
debug_print(f"模拟干燥体积变化: {original_liquid_volume:.2f}mL -> {new_volume:.2f}mL")
debug_print(f"协议生成完成,共 {len(action_sequence)} 个动作")
print(f"📊 模拟干燥体积变化: {original_liquid_volume:.2f}mL {new_volume:.2f}mL (-{volume_loss:.2f}mL)")
print(f"📄 DRY: 协议生成完成,共 {len(action_sequence)} 个动作 🎯")
return action_sequence
debug_print(f"找到加热器: {heater_id}")
print(f"🎉 找到加热器: {heater_id}!")
# 3. Start the heater for drying
print(f"\n🚀 Step 3: executing the drying sequence...")
print(f"🔥 Starting heater {heater_id} for drying")
# 3.1 Start heating
print(f" ⚡ Action 1: start heating to {dry_temp}°C...")
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill_start",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel_id}, # 🔧 使用 vessel_id
"temp": dry_temp,
"purpose": f"干燥 {compound or '化合物'}"
}
})
print(f" ✅ 加热器启动命令已添加 🔥")
# 3.2 等待温度稳定
print(f" ⏳ 动作2: 等待温度稳定...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -120,27 +183,34 @@ def generate_dry_protocol(
"description": f"等待温度稳定到 {dry_temp}°C"
}
})
print(f" ✅ 温度稳定等待命令已添加 🌡️")
# 3.3 保持干燥温度
print(f" 🔄 动作3: 保持干燥温度 {simulation_time/60:.0f} 分钟...")
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel_id}, # 🔧 使用 vessel_id
"temp": dry_temp,
"time": simulation_time,
"purpose": f"干燥 {compound or '化合物'},保持温度 {dry_temp}°C"
}
})
# 干燥过程中的体积变化计算
print(f" ✅ 温度保持命令已添加 🌡️⏰")
# 🔧 新增:干燥过程中的体积变化计算
print(f"🔧 计算干燥过程中的体积变化...")
if original_liquid_volume > 0:
evaporation_rate = 0.001 * dry_temp
total_evaporation = min(original_liquid_volume * 0.8,
evaporation_rate * simulation_time)
# 干燥过程中,溶剂会蒸发,固体保留
# 根据温度和时间估算蒸发量
evaporation_rate = 0.001 * dry_temp # 每秒每°C蒸发0.001mL
total_evaporation = min(original_liquid_volume * 0.8,
evaporation_rate * simulation_time) # 最多蒸发80%
new_volume = max(0.0, original_liquid_volume - total_evaporation)
# Update the volume in the vessel dict
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
@@ -152,14 +222,15 @@ def generate_dry_protocol(
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
# 🔧 Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
@@ -167,29 +238,37 @@ def generate_dry_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
debug_print(f"干燥体积变化: {original_liquid_volume:.2f}mL -> {new_volume:.2f}mL (-{total_evaporation:.2f}mL)")
print(f"📊 干燥体积变化计算:")
print(f" - 初始体积: {original_liquid_volume:.2f}mL")
print(f" - 蒸发量: {total_evaporation:.2f}mL")
print(f" - 剩余体积: {new_volume:.2f}mL")
print(f" - 蒸发率: {(total_evaporation/original_liquid_volume*100):.1f}%")
# 3.4 停止加热
print(f" ⏹️ 动作4: 停止加热...")
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill_stop",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel_id}, # 🔧 使用 vessel_id
"purpose": f"干燥完成,停止加热"
}
})
print(f" ✅ 停止加热命令已添加 🛑")
# 3.5 等待冷却
print(f" ❄️ 动作5: 等待冷却...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
"time": 10.0,
"time": 10.0, # 等待10秒冷却
"description": f"等待 {compound or '化合物'} 冷却"
}
})
# 最终状态
print(f" ✅ 冷却等待命令已添加 🧊")
# 🔧 New: status report after drying completes
final_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -197,37 +276,60 @@ def generate_dry_protocol(
final_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
final_liquid_volume = current_volume
debug_print(f"干燥协议生成完成: {len(action_sequence)} 个动作, 体积 {original_liquid_volume:.2f} -> {final_liquid_volume:.2f}mL")
print(f"\n🎊 DRY: 协议生成完成,共 {len(action_sequence)} 个动作 🎯")
print(f"⏱️ DRY: 预计总时间: {(simulation_time + 30)/60:.0f} 分钟 ⌛")
print(f"📊 干燥结果:")
print(f" - 容器: {vessel_id}")
print(f" - 化合物: {compound or '未指定'}")
print(f" - 干燥前体积: {original_liquid_volume:.2f}mL")
print(f" - 干燥后体积: {final_liquid_volume:.2f}mL")
print(f" - 蒸发体积: {(original_liquid_volume - final_liquid_volume):.2f}mL")
print(f"🏁 所有动作序列准备就绪! ✨")
return action_sequence
# Convenience functions
def generate_quick_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
# 🔧 New: convenience functions
def generate_quick_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
temp: float = 40.0, time: float = 30.0) -> List[Dict[str, Any]]:
"""Quick dry: low temperature, short time."""
vessel_id = vessel["id"]
print(f"🌡️ Quick dry: {compound or 'compound'} → {vessel_id} @ {temp}°C ({time}min)")
# NOTE: generate_dry_protocol hard-codes its temperature and time internally,
# so the temp/time arguments here are informational only.
return generate_dry_protocol(G, vessel, compound)
def generate_thorough_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
def generate_thorough_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
temp: float = 80.0, time: float = 120.0) -> List[Dict[str, Any]]:
"""Thorough dry: high temperature, long time."""
vessel_id = vessel["id"]
print(f"🔥 Thorough dry: {compound or 'compound'} → {vessel_id} @ {temp}°C ({time}min)")
return generate_dry_protocol(G, vessel, compound)
def generate_gentle_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
def generate_gentle_dry_protocol(G: nx.DiGraph, vessel: dict, compound: str = "",
temp: float = 30.0, time: float = 180.0) -> List[Dict[str, Any]]:
"""Gentle dry: low temperature, long time."""
vessel_id = vessel["id"]
print(f"🌡️ Gentle dry: {compound or 'compound'} → {vessel_id} @ {temp}°C ({time}min)")
return generate_dry_protocol(G, vessel, compound)
# Test function
def test_dry_protocol():
"""Test the dry protocol."""
debug_print("=== DRY PROTOCOL test ===")
debug_print("Test complete")
print("=== DRY PROTOCOL test ===")
print("Test complete")
if __name__ == "__main__":
test_dry_protocol()
test_dry_protocol()
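The drying branch above estimates solvent loss as 0.001 mL per second per °C, capped at 80% of the starting volume; these constants are the protocol's own rough model rather than measured data. Isolated as a function for illustration (`estimate_evaporation` is a hypothetical name, not part of this codebase):

```python
def estimate_evaporation(volume_ml: float, temp_c: float, time_s: float) -> float:
    """Evaporated volume under the protocol's linear model:
    0.001 mL/s per degree Celsius, never more than 80% of the liquid."""
    evaporation_rate = 0.001 * temp_c  # mL per second at temp_c
    return min(volume_ml * 0.8, evaporation_rate * time_s)
```

At the defaults above (60°C for a 60 s simulated hold), 10 mL of liquid loses 3.6 mL, well under the 80% cap.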

View File

@@ -3,14 +3,38 @@ from functools import partial
import networkx as nx
import logging
import uuid
import sys
from typing import List, Dict, Any, Optional
from .utils.vessel_parser import get_vessel, find_connected_stirrer
from .utils.logger_util import debug_print, action_log
from .utils.vessel_parser import get_vessel
from .utils.logger_util import action_log
from .pump_protocol import generate_pump_protocol_with_rinsing, generate_pump_protocol
# Logger setup
logger = logging.getLogger(__name__)
# Make sure output is UTF-8 encoded
if hasattr(sys.stdout, 'reconfigure'):
try:
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
except Exception:
pass
def debug_print(message):
"""Debug output helper - supports Chinese text"""
try:
# Make sure the message is a string
safe_message = str(message)
logger.info(f"[EvacuateAndRefill] {safe_message}")
except UnicodeEncodeError:
# If encoding fails, replace the unsupported characters
safe_message = str(message).encode('utf-8', errors='replace').decode('utf-8')
logger.info(f"[EvacuateAndRefill] {safe_message}")
except Exception:
# Last-resort fallback
fallback_message = f"Log output error: {repr(message)}"
logger.info(f"[EvacuateAndRefill] {fallback_message}")
create_action_log = partial(action_log, prefix="[EvacuateAndRefill]")
def find_gas_source(G: nx.DiGraph, gas: str) -> str:
@@ -20,9 +44,10 @@ def find_gas_source(G: nx.DiGraph, gas: str) -> str:
2. Match by gas type (data.gas_type)
3. Default gas source
"""
debug_print(f"正在查找气体 '{gas}' 的气源...")
# 通过容器名称匹配
debug_print(f"🔍 正在查找气体 '{gas}' 的气源...")
# 第一步:通过容器名称匹配
debug_print(f"📋 方法1: 容器名称匹配...")
gas_source_patterns = [
f"gas_source_{gas}",
f"gas_{gas}",
@@ -32,178 +57,254 @@ def find_gas_source(G: nx.DiGraph, gas: str) -> str:
f"reagent_bottle_{gas}",
f"bottle_{gas}"
]
debug_print(f"🎯 尝试的容器名称: {gas_source_patterns}")
for pattern in gas_source_patterns:
if pattern in G.nodes():
debug_print(f"通过名称找到气源: {pattern}")
debug_print(f"通过名称找到气源: {pattern}")
return pattern
# 通过气体类型匹配 (data.gas_type)
# 第二步:通过气体类型匹配 (data.gas_type)
debug_print(f"📋 方法2: 气体类型匹配...")
for node_id in G.nodes():
node_data = G.nodes[node_id]
node_class = node_data.get('class', '') or ''
if ('gas_source' in node_class or
'gas' in node_id.lower() or
# Check whether this is a gas-source device
if ('gas_source' in node_class or
'gas' in node_id.lower() or
node_id.startswith('flask_')):
# Check data.gas_type
data = node_data.get('data', {})
gas_type = data.get('gas_type', '')
if gas_type.lower() == gas.lower():
debug_print(f"Found gas source by gas type: {node_id} (gas type: {gas_type})")
debug_print(f"Found gas source by gas type: {node_id} (gas type: {gas_type})")
return node_id
# Check config.gas_type
config = node_data.get('config', {})
config_gas_type = config.get('gas_type', '')
if config_gas_type.lower() == gas.lower():
debug_print(f"Found gas source by configured gas type: {node_id} (configured gas type: {config_gas_type})")
debug_print(f"Found gas source by configured gas type: {node_id} (configured gas type: {config_gas_type})")
return node_id
# Find all available gas-source devices
# Step 3: find all available gas-source devices
debug_print(f"📋 Method 3: listing available gas sources...")
available_gas_sources = []
for node_id in G.nodes():
node_data = G.nodes[node_id]
node_class = node_data.get('class', '') or ''
if ('gas_source' in node_class or
if ('gas_source' in node_class or
'gas' in node_id.lower() or
(node_id.startswith('flask_') and any(g in node_id.lower() for g in ['air', 'nitrogen', 'argon']))):
data = node_data.get('data', {})
gas_type = data.get('gas_type', 'unknown')
available_gas_sources.append(f"{node_id} (gas type: {gas_type})")
# If the specific gas cannot be found, use the first default gas source
debug_print(f"📊 Available gas sources: {available_gas_sources}")
# Step 4: if the specific gas cannot be found, use the first default gas source
debug_print(f"📋 Method 4: looking for a default gas source...")
default_gas_sources = [
node for node in G.nodes()
node for node in G.nodes()
if ((G.nodes[node].get('class') or '').find('virtual_gas_source') != -1
or 'gas_source' in node)
]
if default_gas_sources:
default_source = default_gas_sources[0]
debug_print(f"Specific gas '{gas}' not found, using default gas source: {default_source}")
debug_print(f"⚠️ Specific gas '{gas}' not found, using default gas source: {default_source}")
return default_source
debug_print(f"❌ All lookup methods failed!")
raise ValueError(f"No gas source found for gas '{gas}'. Available gas sources: {available_gas_sources}")
def find_vacuum_pump(G: nx.DiGraph) -> str:
"""Find the vacuum pump device."""
debug_print("🔍 Looking for a vacuum pump...")
vacuum_pumps = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if ('virtual_vacuum_pump' in node_class or
'vacuum_pump' in node.lower() or
if ('virtual_vacuum_pump' in node_class or
'vacuum_pump' in node.lower() or
'vacuum' in node_class.lower()):
vacuum_pumps.append(node)
debug_print(f"📋 Found vacuum pump: {node}")
if not vacuum_pumps:
debug_print(f"❌ No vacuum pump found in the system")
raise ValueError("No vacuum pump found in the system")
debug_print(f"Using vacuum pump: {vacuum_pumps[0]}")
debug_print(f"Using vacuum pump: {vacuum_pumps[0]}")
return vacuum_pumps[0]
def find_connected_stirrer(G: nx.DiGraph, vessel: str) -> Optional[str]:
"""Find the stirrer connected to the given vessel."""
debug_print(f"🔍 Looking for a stirrer connected to vessel {vessel}...")
stirrer_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'virtual_stirrer' in node_class or 'stirrer' in node.lower():
stirrer_nodes.append(node)
debug_print(f"📋 Found stirrer: {node}")
debug_print(f"📊 Total stirrers found: {len(stirrer_nodes)}")
# Check which stirrer is connected to the target vessel
for stirrer in stirrer_nodes:
if G.has_edge(stirrer, vessel) or G.has_edge(vessel, stirrer):
debug_print(f"✅ Found connected stirrer: {stirrer}")
return stirrer
# Without a connected stirrer, return the first available one
if stirrer_nodes:
debug_print(f"⚠️ No directly connected stirrer, using the first available: {stirrer_nodes[0]}")
return stirrer_nodes[0]
debug_print("❌ No stirrer found")
return None
def find_vacuum_solenoid_valve(G: nx.DiGraph, vacuum_pump: str) -> Optional[str]:
"""Find the solenoid valve associated with the vacuum pump."""
debug_print(f"🔍 Looking for the solenoid valve of vacuum pump {vacuum_pump}...")
# Collect all solenoid valves
solenoid_valves = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if ('solenoid' in node_class.lower() or 'solenoid_valve' in node.lower()):
solenoid_valves.append(node)
debug_print(f"📋 Found solenoid valve: {node}")
debug_print(f"📊 Solenoid valves found: {solenoid_valves}")
# Check connectivity
debug_print(f"📋 Method 1: checking connections...")
for solenoid in solenoid_valves:
if G.has_edge(solenoid, vacuum_pump) or G.has_edge(vacuum_pump, solenoid):
debug_print(f"Found connected vacuum solenoid valve: {solenoid}")
debug_print(f"Found connected vacuum solenoid valve: {solenoid}")
return solenoid
# Fall back to naming conventions
debug_print(f"📋 Method 2: checking naming conventions...")
for solenoid in solenoid_valves:
if 'vacuum' in solenoid.lower() or solenoid == 'solenoid_valve_1':
debug_print(f"Found vacuum solenoid valve by name: {solenoid}")
debug_print(f"Found vacuum solenoid valve by name: {solenoid}")
return solenoid
debug_print("No vacuum solenoid valve found")
debug_print("⚠️ No vacuum solenoid valve found")
return None
def find_gas_solenoid_valve(G: nx.DiGraph, gas_source: str) -> Optional[str]:
    """Find the solenoid valve associated with the gas source."""
    debug_print(f"🔍 Looking for the solenoid valve of gas source {gas_source}...")
    # Collect all solenoid valves
    solenoid_valves = []
    for node in G.nodes():
        node_data = G.nodes[node]
        node_class = node_data.get('class', '') or ''
        if 'solenoid' in node_class.lower() or 'solenoid_valve' in node.lower():
            solenoid_valves.append(node)
    debug_print(f"📊 Solenoid valves found: {solenoid_valves}")
    # Method 1: check graph connectivity
    debug_print("📋 Method 1: checking connections...")
    for solenoid in solenoid_valves:
        if G.has_edge(gas_source, solenoid) or G.has_edge(solenoid, gas_source):
            debug_print(f"Found connected gas solenoid valve: {solenoid}")
            return solenoid
    # Method 2: fall back to naming conventions
    debug_print("📋 Method 2: checking naming conventions...")
    for solenoid in solenoid_valves:
        if 'gas' in solenoid.lower() or solenoid == 'solenoid_valve_2':
            debug_print(f"Found gas solenoid valve by name: {solenoid}")
            return solenoid
    debug_print("⚠️ No gas solenoid valve found")
    return None
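The two valve finders above share a two-stage strategy: prefer a solenoid valve that is directly connected in the graph, then fall back to naming conventions. A minimal sketch of that pattern, using a plain adjacency set in place of the networkx graph (device and valve names here are illustrative only):

```python
# Stand-in for the networkx edge set used by the real finders.
edges = {("solenoid_valve_1", "vacuum_pump_1"), ("solenoid_valve_2", "gas_source_1")}

def has_edge(a, b):
    return (a, b) in edges or (b, a) in edges

def find_solenoid(valves, device, name_hint, default_name):
    # Method 1: graph connectivity
    for v in valves:
        if has_edge(v, device):
            return v
    # Method 2: naming-convention fallback
    for v in valves:
        if name_hint in v.lower() or v == default_name:
            return v
    return None

valves = ["solenoid_valve_1", "solenoid_valve_2"]
print(find_solenoid(valves, "vacuum_pump_1", "vacuum", "solenoid_valve_1"))  # solenoid_valve_1
```

The fallback means a mis-wired graph still resolves to a plausible valve, at the cost of possibly picking the wrong one; the warning logs above exist for exactly that case.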
def generate_evacuateandrefill_protocol(
    G: nx.DiGraph,
    vessel: dict,  # vessel is a dict, not a plain string id
    gas: str,
    **kwargs
) -> List[Dict[str, Any]]:
    """
    Generate the action sequence for an evacuate-and-refill operation.

    Args:
        G: device graph
        vessel: target vessel dict (required)
        gas: gas name (required)
        **kwargs: extra parameters (for compatibility)

    Returns:
        List[Dict[str, Any]]: action sequence
    """
    # Extract the vessel ID from the dict
    vessel_id, vessel_data = get_vessel(vessel)
    # Repeat count is hard-coded to 3
    repeats = 3
    # Generate a protocol ID
    protocol_id = str(uuid.uuid4())
    debug_print(f"🆔 Generated protocol ID: {protocol_id}")
    debug_print("=" * 60)
    debug_print("🧪 Generating evacuate-and-refill protocol")
    debug_print("📋 Input parameters:")
    debug_print(f"  🥼 vessel: {vessel} (ID: {vessel_id})")
    debug_print(f"  💨 gas: '{gas}'")
    debug_print(f"  🔄 repeats: {repeats} (hard-coded)")
    debug_print(f"  📦 other parameters: {kwargs}")
    debug_print("=" * 60)
    action_sequence = []
    # === Parameter validation and correction ===
    debug_print("🔍 Step 1: validating and correcting parameters...")
    action_sequence.append(create_action_log(f"Starting evacuate-and-refill - vessel: {vessel_id}", "🎬"))
    action_sequence.append(create_action_log(f"Target gas: {gas}", "💨"))
    action_sequence.append(create_action_log(f"Cycle count: {repeats}", "🔄"))
    # Validate required parameters
    if not vessel_id:
        debug_print("❌ Vessel parameter must not be empty")
        raise ValueError("Vessel parameter must not be empty")
    if not gas:
        debug_print("❌ Gas parameter must not be empty")
        raise ValueError("Gas parameter must not be empty")
    if vessel_id not in G.nodes():
        debug_print(f"❌ Vessel '{vessel_id}' does not exist in the system")
        raise ValueError(f"Vessel '{vessel_id}' does not exist in the system")
    debug_print("✅ Basic parameter validation passed")
    action_sequence.append(create_action_log("Parameter validation passed", ""))
    # Normalize the gas name
    debug_print("🔧 Normalizing gas name...")
    gas_aliases = {
        'n2': 'nitrogen',
        'ar': 'argon',
@@ -218,54 +319,61 @@ def generate_evacuateandrefill_protocol(
        '二氧化碳': 'carbon_dioxide',
        '氢气': 'hydrogen'
    }
    original_gas = gas
    gas_lower = gas.lower().strip()
    if gas_lower in gas_aliases:
        gas = gas_aliases[gas_lower]
        debug_print(f"🔄 Normalized gas name: {original_gas} -> {gas}")
        action_sequence.append(create_action_log(f"Gas name normalized: {original_gas} -> {gas}", "🔄"))
    debug_print(f"📋 Final parameters: vessel={vessel_id}, gas={gas}, repeats={repeats}")
    # === Device lookup ===
    debug_print("🔍 Step 2: looking up devices...")
    action_sequence.append(create_action_log("Looking up related devices...", "🔍"))
    try:
        vacuum_pump = find_vacuum_pump(G)
        action_sequence.append(create_action_log(f"Found vacuum pump: {vacuum_pump}", "🌪️"))
        gas_source = find_gas_source(G, gas)
        action_sequence.append(create_action_log(f"Found gas source: {gas_source}", "💨"))
        vacuum_solenoid = find_vacuum_solenoid_valve(G, vacuum_pump)
        if vacuum_solenoid:
            action_sequence.append(create_action_log(f"Found vacuum solenoid valve: {vacuum_solenoid}", "🚪"))
        else:
            action_sequence.append(create_action_log("No vacuum solenoid valve found", "⚠️"))
        gas_solenoid = find_gas_solenoid_valve(G, gas_source)
        if gas_solenoid:
            action_sequence.append(create_action_log(f"Found gas solenoid valve: {gas_solenoid}", "🚪"))
        else:
            action_sequence.append(create_action_log("No gas solenoid valve found", "⚠️"))
        stirrer_id = find_connected_stirrer(G, vessel_id)
        if stirrer_id:
            action_sequence.append(create_action_log(f"Found stirrer: {stirrer_id}", "🌪️"))
        else:
            action_sequence.append(create_action_log("No stirrer found", "⚠️"))
        debug_print("📊 Device configuration:")
        debug_print(f"  🌪️ vacuum pump: {vacuum_pump}")
        debug_print(f"  💨 gas source: {gas_source}")
        debug_print(f"  🚪 vacuum solenoid valve: {vacuum_solenoid}")
        debug_print(f"  🚪 gas solenoid valve: {gas_solenoid}")
        debug_print(f"  🌪️ stirrer: {stirrer_id}")
    except Exception as e:
        debug_print(f"Device lookup failed: {str(e)}")
        action_sequence.append(create_action_log(f"Device lookup failed: {str(e)}", ""))
        raise ValueError(f"Device lookup failed: {str(e)}")
    # === Parameter setup ===
    debug_print("🔍 Step 3: setting parameters...")
    action_sequence.append(create_action_log("Setting operation parameters...", "⚙️"))
    # Adjust parameters according to the gas type
    if gas.lower() in ['nitrogen', 'argon']:
        VACUUM_VOLUME = 25.0
@@ -273,6 +381,7 @@ def generate_evacuateandrefill_protocol(
        PUMP_FLOW_RATE = 2.0
        VACUUM_TIME = 30.0
        REFILL_TIME = 20.0
        debug_print("💨 Inert gas: using standard parameters")
        action_sequence.append(create_action_log("Inert gas detected, using standard parameters", "💨"))
    elif gas.lower() in ['air', 'oxygen']:
        VACUUM_VOLUME = 20.0
@@ -280,6 +389,7 @@ def generate_evacuateandrefill_protocol(
        PUMP_FLOW_RATE = 1.5
        VACUUM_TIME = 45.0
        REFILL_TIME = 25.0
        debug_print("🔥 Reactive gas: using conservative parameters")
        action_sequence.append(create_action_log("Reactive gas detected, using conservative parameters", "🔥"))
    else:
        VACUUM_VOLUME = 15.0
@@ -287,88 +397,116 @@ def generate_evacuateandrefill_protocol(
        PUMP_FLOW_RATE = 1.0
        VACUUM_TIME = 60.0
        REFILL_TIME = 30.0
        debug_print("❓ Unknown gas: using safe parameters")
        action_sequence.append(create_action_log("Unknown gas type, using safe parameters", ""))
    STIR_SPEED = 200.0
    debug_print("⚙️ Operation parameters:")
    debug_print(f"  📏 vacuum volume: {VACUUM_VOLUME}mL")
    debug_print(f"  📏 refill volume: {REFILL_VOLUME}mL")
    debug_print(f"  ⚡ pump flow rate: {PUMP_FLOW_RATE}mL/s")
    debug_print(f"  ⏱️ vacuum time: {VACUUM_TIME}s")
    debug_print(f"  ⏱️ refill time: {REFILL_TIME}s")
    debug_print(f"  🌪️ stir speed: {STIR_SPEED}RPM")
    action_sequence.append(create_action_log(f"Vacuum volume: {VACUUM_VOLUME}mL", "📏"))
    action_sequence.append(create_action_log(f"Refill volume: {REFILL_VOLUME}mL", "📏"))
    action_sequence.append(create_action_log(f"Pump flow rate: {PUMP_FLOW_RATE}mL/s", ""))
    # === Path validation ===
    debug_print("🔍 Step 4: validating paths...")
    action_sequence.append(create_action_log("Validating transfer paths...", "🛤️"))
    try:
        # Validate the evacuation path
        if nx.has_path(G, vessel_id, vacuum_pump):
            vacuum_path = nx.shortest_path(G, source=vessel_id, target=vacuum_pump)
            debug_print(f"✅ Vacuum path: {' -> '.join(vacuum_path)}")
            action_sequence.append(create_action_log(f"Vacuum path: {' -> '.join(vacuum_path)}", "🛤️"))
        else:
            debug_print("⚠️ Vacuum path does not exist; continuing, but this may cause problems")
            action_sequence.append(create_action_log("Vacuum path check: path does not exist", "⚠️"))
        # Validate the refill path
        if nx.has_path(G, gas_source, vessel_id):
            gas_path = nx.shortest_path(G, source=gas_source, target=vessel_id)
            debug_print(f"✅ Gas path: {' -> '.join(gas_path)}")
            action_sequence.append(create_action_log(f"Gas path: {' -> '.join(gas_path)}", "🛤️"))
        else:
            debug_print("⚠️ Gas path does not exist; continuing, but this may cause problems")
            action_sequence.append(create_action_log("Gas path check: path does not exist", "⚠️"))
    except Exception as e:
        debug_print(f"⚠️ Path validation failed: {str(e)}; continuing")
        action_sequence.append(create_action_log(f"Path validation failed: {str(e)}", "⚠️"))
    # === Start the stirrer ===
    debug_print("🔍 Step 5: starting the stirrer...")
    if stirrer_id:
        debug_print(f"🌪️ Starting stirrer: {stirrer_id}")
        action_sequence.append(create_action_log(f"Starting stirrer {stirrer_id} (speed: {STIR_SPEED}rpm)", "🌪️"))
        action_sequence.append({
            "device_id": stirrer_id,
            "action_name": "start_stir",
            "action_kwargs": {
                "vessel": {"id": vessel_id},
                "stir_speed": STIR_SPEED,
                "purpose": "pre-stir before evacuate and refill"
            }
        })
        # Wait for stirring to stabilize
        action_sequence.append(create_action_log("Waiting for stirring to stabilize...", ""))
        action_sequence.append({
            "action_name": "wait",
            "action_kwargs": {"time": 5.0}
        })
    else:
        debug_print("⚠️ No stirrer found, skipping stirrer start")
        action_sequence.append(create_action_log("Skipping stirrer start", "⏭️"))
    # === Run the cycles ===
    debug_print("🔍 Step 6: running evacuate-refill cycles...")
    action_sequence.append(create_action_log(f"Starting {repeats} evacuate-refill cycles", "🔄"))
    for cycle in range(repeats):
        debug_print(f"=== Cycle {cycle+1}/{repeats} ===")
        action_sequence.append(create_action_log(f"Cycle {cycle+1}/{repeats} started", "🚀"))
        # ============ Evacuation phase ============
        debug_print("🌪️ Evacuation phase started")
        action_sequence.append(create_action_log("Starting evacuation phase", "🌪️"))
        # Start the vacuum pump
        debug_print(f"🔛 Starting vacuum pump: {vacuum_pump}")
        action_sequence.append(create_action_log(f"Starting vacuum pump: {vacuum_pump}", "🔛"))
        action_sequence.append({
            "device_id": vacuum_pump,
            "action_name": "set_status",
            "action_kwargs": {"string": "ON"}
        })
        # Open the vacuum solenoid valve
        if vacuum_solenoid:
            debug_print(f"🚪 Opening vacuum solenoid valve: {vacuum_solenoid}")
            action_sequence.append(create_action_log(f"Opening vacuum solenoid valve: {vacuum_solenoid}", "🚪"))
            action_sequence.append({
                "device_id": vacuum_solenoid,
                "action_name": "set_valve_position",
                "action_kwargs": {"command": "OPEN"}
            })
        # Evacuation transfer
        debug_print(f"🌪️ Evacuating: {vessel_id} -> {vacuum_pump}")
        action_sequence.append(create_action_log(f"Starting evacuation: {vessel_id} -> {vacuum_pump}", "🌪️"))
        try:
            vacuum_transfer_actions = generate_pump_protocol_with_rinsing(
                G=G,
                from_vessel=vessel_id,
                to_vessel=vacuum_pump,
                volume=VACUUM_VOLUME,
                amount="",
@@ -381,25 +519,27 @@ def generate_evacuateandrefill_protocol(
                flowrate=PUMP_FLOW_RATE,
                transfer_flowrate=PUMP_FLOW_RATE
            )
            if vacuum_transfer_actions:
                action_sequence.extend(vacuum_transfer_actions)
                debug_print(f"✅ Added {len(vacuum_transfer_actions)} evacuation actions")
                action_sequence.append(create_action_log(f"Evacuation protocol complete ({len(vacuum_transfer_actions)} actions)", ""))
            else:
                debug_print("⚠️ Evacuation protocol returned an empty sequence, adding a manual action")
                action_sequence.append(create_action_log("Evacuation protocol empty, falling back to a manual wait", "⚠️"))
                action_sequence.append({
                    "action_name": "wait",
                    "action_kwargs": {"time": VACUUM_TIME}
                })
        except Exception as e:
            debug_print(f"Evacuation failed: {str(e)}")
            action_sequence.append(create_action_log(f"Evacuation failed: {str(e)}", ""))
            action_sequence.append({
                "action_name": "wait",
                "action_kwargs": {"time": VACUUM_TIME}
            })
        # Wait after evacuation
        wait_minutes = VACUUM_TIME / 60
        action_sequence.append(create_action_log(f"Waiting after evacuation ({wait_minutes:.1f} min)", ""))
@@ -407,59 +547,65 @@ def generate_evacuateandrefill_protocol(
            "action_name": "wait",
            "action_kwargs": {"time": VACUUM_TIME}
        })
        # Close the vacuum solenoid valve
        if vacuum_solenoid:
            debug_print(f"🚪 Closing vacuum solenoid valve: {vacuum_solenoid}")
            action_sequence.append(create_action_log(f"Closing vacuum solenoid valve: {vacuum_solenoid}", "🚪"))
            action_sequence.append({
                "device_id": vacuum_solenoid,
                "action_name": "set_valve_position",
                "action_kwargs": {"command": "CLOSED"}
            })
        # Stop the vacuum pump
        debug_print(f"🔴 Stopping vacuum pump: {vacuum_pump}")
        action_sequence.append(create_action_log(f"Stopping vacuum pump: {vacuum_pump}", "🔴"))
        action_sequence.append({
            "device_id": vacuum_pump,
            "action_name": "set_status",
            "action_kwargs": {"string": "OFF"}
        })
        # Short wait between phases
        action_sequence.append(create_action_log("Evacuation phase complete, short wait", ""))
        action_sequence.append({
            "action_name": "wait",
            "action_kwargs": {"time": 5.0}
        })
        # ============ Refill phase ============
        debug_print("💨 Refill phase started")
        action_sequence.append(create_action_log("Starting gas refill phase", "💨"))
        # Start the gas source
        debug_print(f"🔛 Starting gas source: {gas_source}")
        action_sequence.append(create_action_log(f"Starting gas source: {gas_source}", "🔛"))
        action_sequence.append({
            "device_id": gas_source,
            "action_name": "set_status",
            "action_kwargs": {"string": "ON"}
        })
        # Open the gas solenoid valve
        if gas_solenoid:
            debug_print(f"🚪 Opening gas solenoid valve: {gas_solenoid}")
            action_sequence.append(create_action_log(f"Opening gas solenoid valve: {gas_solenoid}", "🚪"))
            action_sequence.append({
                "device_id": gas_solenoid,
                "action_name": "set_valve_position",
                "action_kwargs": {"command": "OPEN"}
            })
        # Refill transfer
        debug_print(f"💨 Refilling: {gas_source} -> {vessel_id}")
        action_sequence.append(create_action_log(f"Starting gas refill: {gas_source} -> {vessel_id}", "💨"))
        try:
            gas_transfer_actions = generate_pump_protocol_with_rinsing(
                G=G,
                from_vessel=gas_source,
                to_vessel=vessel_id,
                volume=REFILL_VOLUME,
                amount="",
                time=0.0,
@@ -471,25 +617,27 @@ def generate_evacuateandrefill_protocol(
                flowrate=PUMP_FLOW_RATE,
                transfer_flowrate=PUMP_FLOW_RATE
            )
            if gas_transfer_actions:
                action_sequence.extend(gas_transfer_actions)
                debug_print(f"✅ Added {len(gas_transfer_actions)} refill actions")
                action_sequence.append(create_action_log(f"Gas refill protocol complete ({len(gas_transfer_actions)} actions)", ""))
            else:
                debug_print("⚠️ Refill protocol returned an empty sequence, adding a manual action")
                action_sequence.append(create_action_log("Refill protocol empty, falling back to a manual wait", "⚠️"))
                action_sequence.append({
                    "action_name": "wait",
                    "action_kwargs": {"time": REFILL_TIME}
                })
        except Exception as e:
            debug_print(f"Gas refill failed: {str(e)}")
            action_sequence.append(create_action_log(f"Gas refill failed: {str(e)}", ""))
            action_sequence.append({
                "action_name": "wait",
                "action_kwargs": {"time": REFILL_TIME}
            })
        # Wait after refill
        refill_wait_minutes = REFILL_TIME / 60
        action_sequence.append(create_action_log(f"Waiting after refill ({refill_wait_minutes:.1f} min)", ""))
@@ -497,26 +645,29 @@ def generate_evacuateandrefill_protocol(
            "action_name": "wait",
            "action_kwargs": {"time": REFILL_TIME}
        })
        # Close the gas solenoid valve
        if gas_solenoid:
            debug_print(f"🚪 Closing gas solenoid valve: {gas_solenoid}")
            action_sequence.append(create_action_log(f"Closing gas solenoid valve: {gas_solenoid}", "🚪"))
            action_sequence.append({
                "device_id": gas_solenoid,
                "action_name": "set_valve_position",
                "action_kwargs": {"command": "CLOSED"}
            })
        # Stop the gas source
        debug_print(f"🔴 Stopping gas source: {gas_source}")
        action_sequence.append(create_action_log(f"Stopping gas source: {gas_source}", "🔴"))
        action_sequence.append({
            "device_id": gas_source,
            "action_name": "set_status",
            "action_kwargs": {"string": "OFF"}
        })
        # Wait between cycles
        if cycle < repeats - 1:
            debug_print("⏳ Waiting for the next cycle...")
            action_sequence.append(create_action_log("Waiting for the next cycle...", ""))
            action_sequence.append({
                "action_name": "wait",
@@ -524,58 +675,78 @@ def generate_evacuateandrefill_protocol(
        else:
            action_sequence.append(create_action_log(f"Cycle {cycle+1}/{repeats} complete", ""))
    # === Stop the stirrer ===
    debug_print("🔍 Step 7: stopping the stirrer...")
    if stirrer_id:
        debug_print(f"🛑 Stopping stirrer: {stirrer_id}")
        action_sequence.append(create_action_log(f"Stopping stirrer: {stirrer_id}", "🛑"))
        action_sequence.append({
            "device_id": stirrer_id,
            "action_name": "stop_stir",
            "action_kwargs": {"vessel": {"id": vessel_id}}
        })
    else:
        action_sequence.append(create_action_log("Skipping stirrer stop", "⏭️"))
    # === Final wait ===
    action_sequence.append(create_action_log("Final stabilization wait...", ""))
    action_sequence.append({
        "action_name": "wait",
        "action_kwargs": {"time": 10.0}
    })
    # === Summary ===
    total_time = (VACUUM_TIME + REFILL_TIME + 25) * repeats + 20
    debug_print("=" * 60)
    debug_print("🎉 Evacuate-and-refill protocol generation complete")
    debug_print("📊 Protocol statistics:")
    debug_print(f"  📋 total actions: {len(action_sequence)}")
    debug_print(f"  ⏱️ estimated total time: {total_time:.0f}s ({total_time/60:.1f} min)")
    debug_print(f"  🥼 vessel: {vessel_id}")
    debug_print(f"  💨 gas: {gas}")
    debug_print(f"  🔄 repeats: {repeats}")
    debug_print("=" * 60)
    # Append the completion log
    summary_msg = f"Evacuate-and-refill protocol complete: {vessel_id} ({repeats} cycles with {gas})"
    action_sequence.append(create_action_log(summary_msg, "🎉"))
    return action_sequence
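Throughout this generator, `vessel` arrives as a dict rather than a plain id string, and `get_vessel` splits it into an id and a data payload. A minimal sketch of that convention (`get_vessel_sketch` is a hypothetical stand-in for the real `get_vessel` helper, which may handle more cases):

```python
def get_vessel_sketch(vessel):
    """Accept either a plain id string or a {"id": ..., "data": {...}} dict."""
    if isinstance(vessel, str):
        return vessel, {}
    return vessel["id"], vessel.get("data", {})

vessel_id, vessel_data = get_vessel_sketch({"id": "flask_1", "data": {"liquid_volume": 50.0}})
print(vessel_id)  # flask_1
```

Keeping the data payload alongside the id is what lets compilers like the evaporate protocol read and update `liquid_volume` in place.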
# === Convenience functions ===
def generate_nitrogen_purge_protocol(G: nx.DiGraph, vessel: dict, **kwargs) -> List[Dict[str, Any]]:
    """Generate a nitrogen purge protocol."""
    vessel_id = vessel["id"]
    debug_print(f"💨 Generating nitrogen purge protocol: {vessel_id}")
    return generate_evacuateandrefill_protocol(G, vessel, "nitrogen", **kwargs)

def generate_argon_purge_protocol(G: nx.DiGraph, vessel: dict, **kwargs) -> List[Dict[str, Any]]:
    """Generate an argon purge protocol."""
    vessel_id = vessel["id"]
    debug_print(f"💨 Generating argon purge protocol: {vessel_id}")
    return generate_evacuateandrefill_protocol(G, vessel, "argon", **kwargs)

def generate_air_purge_protocol(G: nx.DiGraph, vessel: dict, **kwargs) -> List[Dict[str, Any]]:
    """Generate an air purge protocol."""
    vessel_id = vessel["id"]
    debug_print(f"💨 Generating air purge protocol: {vessel_id}")
    return generate_evacuateandrefill_protocol(G, vessel, "air", **kwargs)

def generate_inert_atmosphere_protocol(G: nx.DiGraph, vessel: dict, gas: str = "nitrogen", **kwargs) -> List[Dict[str, Any]]:
    """Generate an inert-atmosphere protocol."""
    vessel_id = vessel["id"]
    debug_print(f"🛡️ Generating inert-atmosphere protocol: {vessel_id} (using {gas})")
    return generate_evacuateandrefill_protocol(G, vessel, gas, **kwargs)

# Test function
def test_evacuateandrefill_protocol():
    """Test the evacuate-and-refill protocol."""
    debug_print("=== Evacuate-and-refill protocol test ===")
    debug_print("Test complete")

if __name__ == "__main__":
    test_evacuateandrefill_protocol()
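The gas-name normalization step above maps short or localized aliases onto canonical names before parameter selection. A self-contained sketch of that logic, using only a subset of the alias table shown in the function:

```python
# Subset of the gas_aliases table used by generate_evacuateandrefill_protocol.
GAS_ALIASES = {"n2": "nitrogen", "ar": "argon"}

def normalize_gas(gas):
    # Lowercase and strip whitespace, then look up the alias table;
    # unknown names pass through unchanged.
    g = gas.lower().strip()
    return GAS_ALIASES.get(g, g)

print(normalize_gas("N2"))      # nitrogen
print(normalize_gas(" Ar "))    # argon
print(normalize_gas("helium"))  # helium
```

Passing unknown names through unchanged is what routes them into the "unknown gas: safe parameters" branch rather than raising.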

View File

@@ -0,0 +1,143 @@
# import numpy as np
# import networkx as nx
# def generate_evacuateandrefill_protocol(
#     G: nx.DiGraph,
#     vessel: str,
#     gas: str,
#     repeats: int = 1
# ) -> list[dict]:
#     """
#     Generate the action sequence for an evacuate-and-refill operation.
#     :param G: directed graph; nodes are vessels and syringe pumps, edges are fluid lines,
#               and the attribute of edge A→B is the valve position at the A end of the line
#     :param vessel: target vessel
#     :param gas: gas source name
#     :param repeats: number of evacuate-refill cycles
#     :return: action sequence for the operation
#     """
#     # Build the action sequence for the solenoid valves, vacuum pump, and gas source
#     vacuum_action_sequence = []
#     nodes = G.nodes(data=True)
#     # Find the solenoid valves, vacuum pump, and gas source connected to the vessel
#     vacuum_backbone = {"vessel": vessel}
#     for neighbor in G.neighbors(vessel):
#         if nodes[neighbor]["class"].startswith("solenoid_valve"):
#             for neighbor2 in G.neighbors(neighbor):
#                 if neighbor2 == vessel:
#                     continue
#                 if nodes[neighbor2]["class"].startswith("vacuum_pump"):
#                     vacuum_backbone.update({"vacuum_valve": neighbor, "pump": neighbor2})
#                     break
#                 elif nodes[neighbor2]["class"].startswith("gas_source"):
#                     vacuum_backbone.update({"gas_valve": neighbor, "gas": neighbor2})
#                     break
#     # Check that all required devices were found
#     if len(vacuum_backbone) < 5:
#         print(f"\n\n\n{vacuum_backbone}\n\n\n")
#         raise ValueError("Not all devices are connected to the vessel.")
#     # Build the action sequence
#     for i in range(repeats):
#         # Open the vacuum valve, close the gas valve
#         vacuum_action_sequence.append([
#             {
#                 "device_id": vacuum_backbone["vacuum_valve"],
#                 "action_name": "set_valve_position",
#                 "action_kwargs": {
#                     "command": "OPEN"
#                 }
#             },
#             {
#                 "device_id": vacuum_backbone["gas_valve"],
#                 "action_name": "set_valve_position",
#                 "action_kwargs": {
#                     "command": "CLOSED"
#                 }
#             }
#         ])
#         # Turn on the vacuum pump, turn off the gas source
#         vacuum_action_sequence.append([
#             {
#                 "device_id": vacuum_backbone["pump"],
#                 "action_name": "set_status",
#                 "action_kwargs": {
#                     "string": "ON"
#                 }
#             },
#             {
#                 "device_id": vacuum_backbone["gas"],
#                 "action_name": "set_status",
#                 "action_kwargs": {
#                     "string": "OFF"
#                 }
#             }
#         ])
#         vacuum_action_sequence.append({"action_name": "wait", "action_kwargs": {"time": 60}})
#         # Close the vacuum valve, open the gas valve
#         vacuum_action_sequence.append([
#             {
#                 "device_id": vacuum_backbone["vacuum_valve"],
#                 "action_name": "set_valve_position",
#                 "action_kwargs": {
#                     "command": "CLOSED"
#                 }
#             },
#             {
#                 "device_id": vacuum_backbone["gas_valve"],
#                 "action_name": "set_valve_position",
#                 "action_kwargs": {
#                     "command": "OPEN"
#                 }
#             }
#         ])
#         # Turn off the vacuum pump, turn on the gas source
#         vacuum_action_sequence.append([
#             {
#                 "device_id": vacuum_backbone["pump"],
#                 "action_name": "set_status",
#                 "action_kwargs": {
#                     "string": "OFF"
#                 }
#             },
#             {
#                 "device_id": vacuum_backbone["gas"],
#                 "action_name": "set_status",
#                 "action_kwargs": {
#                     "string": "ON"
#                 }
#             }
#         ])
#         vacuum_action_sequence.append({"action_name": "wait", "action_kwargs": {"time": 60}})
#     # Turn off the gas source
#     vacuum_action_sequence.append(
#         {
#             "device_id": vacuum_backbone["gas"],
#             "action_name": "set_status",
#             "action_kwargs": {
#                 "string": "OFF"
#             }
#         }
#     )
#     # Close the valve
#     vacuum_action_sequence.append(
#         {
#             "device_id": vacuum_backbone["gas_valve"],
#             "action_name": "set_valve_position",
#             "action_kwargs": {
#                 "command": "CLOSED"
#             }
#         }
#     )
#     return vacuum_action_sequence

View File

@@ -4,99 +4,128 @@ import logging
import re
from .utils.vessel_parser import get_vessel
from .utils.unit_parser import parse_time_input
from .utils.logger_util import debug_print
logger = logging.getLogger(__name__)
def find_rotavap_device(G: nx.DiGraph, vessel: str = None) -> Optional[str]:
    """
    Find a rotary evaporator device in the configuration graph.

    Args:
        G: device graph
        vessel: a specific device name to check first (optional)

    Returns:
        str: ID of the rotary evaporator found, or None if none was found
    """
    debug_print("🔍 Looking for a rotary evaporator device... 🌪️")
    # If a vessel was specified, first check whether it exists and is a rotary evaporator
    if vessel:
        debug_print(f"🎯 Checking specified device: {vessel} 🔧")
        if vessel in G.nodes():
            node_data = G.nodes[vessel]
            node_class = node_data.get('class', '')
            node_type = node_data.get('type', '')
            debug_print(f"📋 Device info {vessel}: class={node_class}, type={node_type}")
            # Check whether it is a rotary evaporator
            if any(keyword in str(node_class).lower() for keyword in ['rotavap', 'rotary', 'evaporat']):
                debug_print(f"🎉 Found the specified rotary evaporator: {vessel}")
                return vessel
            elif node_type == 'device':
                debug_print(f"Specified device exists, trying to use it directly: {vessel} 🔧")
                return vessel
        else:
            debug_print(f"❌ Specified device {vessel} does not exist 😞")
    # Search all devices for a rotary evaporator
    debug_print("🔎 Searching all devices for a rotary evaporator... 🕵️‍♀️")
    rotavap_candidates = []
    for node_id, node_data in G.nodes(data=True):
        node_class = node_data.get('class', '')
        node_type = node_data.get('type', '')
        # Skip non-device nodes
        if node_type != 'device':
            continue
        # Check the device class
        if any(keyword in str(node_class).lower() for keyword in ['rotavap', 'rotary', 'evaporat']):
            rotavap_candidates.append(node_id)
            debug_print(f"🌟 Found rotary evaporator candidate: {node_id} (class: {node_class}) 🌪️")
        elif any(keyword in str(node_id).lower() for keyword in ['rotavap', 'rotary', 'evaporat']):
            rotavap_candidates.append(node_id)
            debug_print(f"🌟 Found rotary evaporator candidate (by name): {node_id} 🌪️")
    if rotavap_candidates:
        selected = rotavap_candidates[0]  # take the first candidate found
        debug_print(f"🎯 Selected rotary evaporator: {selected} 🏆")
        return selected
    debug_print("😭 No rotary evaporator device found 💔")
    return None
def find_connected_vessel(G: nx.DiGraph, rotavap_device: str) -> Optional[str]:
    """
    Find the vessel connected to the rotary evaporator.

    Args:
        G: device graph
        rotavap_device: rotary evaporator device ID

    Returns:
        str: ID of the connected vessel, or None if none was found
    """
    debug_print(f"🔗 Looking for a vessel connected to {rotavap_device}... 🥽")
    # Check the rotary evaporator's child devices
    rotavap_data = G.nodes[rotavap_device]
    children = rotavap_data.get('children', [])
    debug_print(f"👶 Checking child devices: {children}")
    for child_id in children:
        if child_id in G.nodes():
            child_data = G.nodes[child_id]
            child_type = child_data.get('type', '')
            if child_type == 'container':
                debug_print(f"🎉 Found connected vessel: {child_id} 🥽✨")
                return child_id
    # Check adjacent containers
    debug_print("🤝 Checking adjacent devices...")
    for neighbor in G.neighbors(rotavap_device):
        neighbor_data = G.nodes[neighbor]
        neighbor_type = neighbor_data.get('type', '')
        if neighbor_type == 'container':
            debug_print(f"🎉 Found adjacent vessel: {neighbor} 🥽✨")
            return neighbor
    debug_print("😞 No connected vessel found 💔")
    return None
def generate_evaporate_protocol(
    G: nx.DiGraph,
    vessel: dict,  # vessel is a dict, not a plain string id
    pressure: float = 0.1,
    temp: float = 60.0,
    time: Union[str, float] = "180",  # time may be given as a string with units
    stir_speed: float = 100.0,
    solvent: str = "",
    **kwargs
) -> List[Dict[str, Any]]:
    """
    Generate the action sequence for an evaporation operation, with unit parsing and volume tracking.

    Args:
        G: device graph
        vessel: vessel dict (passed in from XDL)
@@ -106,16 +135,27 @@ def generate_evaporate_protocol(
        stir_speed: rotation speed (RPM), default 100
        solvent: solvent name (used for parameter tuning)
        **kwargs: extra parameters (for compatibility)

    Returns:
        List[Dict[str, Any]]: action sequence
    """
    # Extract the vessel ID from the dict
    vessel_id, vessel_data = get_vessel(vessel)
    debug_print("🌟" * 20)
    debug_print("🌪️ Generating evaporation protocol (with unit parsing and volume tracking) ✨")
    debug_print("📝 Input parameters:")
    debug_print(f"  🥽 vessel: {vessel} (ID: {vessel_id})")
    debug_print(f"  💨 pressure: {pressure} bar")
    debug_print(f"  🌡️ temp: {temp}°C")
    debug_print(f"  ⏰ time: {time} (type: {type(time)})")
    debug_print(f"  🌪️ stir_speed: {stir_speed} RPM")
    debug_print(f"  🧪 solvent: '{solvent}'")
    debug_print("🌟" * 20)
    # Record the vessel state before evaporation
    debug_print("🔍 Recording vessel state before evaporation...")
    original_liquid_volume = 0.0
    if "data" in vessel and "liquid_volume" in vessel["data"]:
        current_volume = vessel["data"]["liquid_volume"]
@@ -123,97 +163,168 @@ def generate_evaporate_protocol(
            original_liquid_volume = current_volume[0]
        elif isinstance(current_volume, (int, float)):
            original_liquid_volume = current_volume
    debug_print(f"📊 Liquid volume before evaporation: {original_liquid_volume:.2f}mL")
    # === Step 1: find the rotary evaporator device ===
    debug_print("📍 Step 1: looking for a rotary evaporator device... 🔍")
    # Validate the vessel parameter
    if not vessel_id:
        debug_print("❌ The vessel parameter must not be empty! 😱")
        raise ValueError("The vessel parameter must not be empty")
    # Find the rotary evaporator device
    rotavap_device = find_rotavap_device(G, vessel_id)
    if not rotavap_device:
        debug_print("💥 No rotary evaporator device found! 😭")
        raise ValueError("No rotary evaporator device found. Check that the configuration graph contains a device whose class includes 'rotavap', 'rotary', or 'evaporat'")
    debug_print(f"🎉 Found rotary evaporator: {rotavap_device}")
    # === Step 2: determine the target vessel ===
    debug_print("📍 Step 2: determining the target vessel... 🥽")
    target_vessel = vessel_id
    # If vessel is the rotary evaporator itself, look for its connected container
    if vessel_id == rotavap_device:
        debug_print("🔄 vessel is the rotary evaporator itself, looking for the connected container...")
        connected_vessel = find_connected_vessel(G, rotavap_device)
        if connected_vessel:
            target_vessel = connected_vessel
            debug_print(f"✅ Using connected vessel: {target_vessel} 🥽✨")
        else:
            debug_print(f"⚠️ No connected vessel found, using the device itself: {rotavap_device} 🔧")
            target_vessel = rotavap_device
    elif vessel_id in G.nodes() and G.nodes[vessel_id].get('type') == 'container':
        debug_print(f"✅ Using specified vessel: {vessel_id} 🥽✨")
        target_vessel = vessel_id
    else:
        debug_print(f"⚠️ Vessel '{vessel_id}' does not exist or has the wrong type, using the rotary evaporator device: {rotavap_device} 🔧")
        target_vessel = rotavap_device
    # === Step 3: unit parsing ===
    debug_print("📍 Step 3: parsing units... ⚡")
    # Parse the time value
    final_time = parse_time_input(time)
    debug_print(f"🎯 Time parsed: {time} → {final_time}s ({final_time/60:.1f} min) ⏰✨")
    # === Step 4: parameter validation and correction ===
    debug_print("📍 Step 4: validating and correcting parameters... 🔧")
    # Clamp parameters to valid ranges
    if pressure <= 0 or pressure > 1.0:
        debug_print(f"⚠️ Vacuum pressure {pressure} bar out of range, correcting to 0.1 bar 💨")
        pressure = 0.1
    else:
        debug_print(f"✅ Vacuum pressure {pressure} bar within the normal range 💨")
    if temp < 10.0 or temp > 200.0:
        debug_print(f"⚠️ Temperature {temp}°C out of range, correcting to 60°C 🌡️")
        temp = 60.0
    else:
        debug_print(f"✅ Temperature {temp}°C within the normal range 🌡️")
    if final_time <= 0:
        debug_print(f"⚠️ Time {final_time}s invalid, correcting to 180s (3 min) ⏰")
        final_time = 180.0
    else:
        debug_print(f"✅ Time {final_time}s ({final_time/60:.1f} min) is valid ⏰")
    if stir_speed < 10.0 or stir_speed > 300.0:
        debug_print(f"⚠️ Rotation speed {stir_speed} RPM out of range, correcting to 100 RPM 🌪️")
        stir_speed = 100.0
    else:
        debug_print(f"✅ Rotation speed {stir_speed} RPM within the normal range 🌪️")
    # Tune parameters according to the solvent
    if solvent:
        debug_print(f"🧪 Tuning parameters for solvent '{solvent}'... 🔬")
        solvent_lower = solvent.lower()
        if any(s in solvent_lower for s in ['water', 'aqueous', 'h2o']):
            temp = max(temp, 80.0)
            pressure = max(pressure, 0.2)
            debug_print("💧 Aqueous solvent: raising temperature and pressure 🌡️💨")
        elif any(s in solvent_lower for s in ['ethanol', 'methanol', 'acetone']):
            temp = min(temp, 50.0)
            pressure = min(pressure, 0.05)
            debug_print("🍺 Volatile solvent: lowering temperature and pressure 🌡️💨")
        elif any(s in solvent_lower for s in ['dmso', 'dmi', 'toluene']):
            temp = max(temp, 100.0)
            pressure = min(pressure, 0.01)
            debug_print("🔥 High-boiling solvent: raising temperature, lowering pressure 🌡️💨")
        else:
            debug_print("🧪 Generic solvent, using standard parameters ✨")
    else:
        debug_print("🤷‍♀️ No solvent specified, using default parameters ✨")
    debug_print(f"🎯 Final parameters: pressure={pressure} bar 💨, temp={temp}°C 🌡️, time={final_time}s ⏰, stir_speed={stir_speed} RPM 🌪️")
    # === Step 5: evaporation volume estimate ===
    debug_print("📍 Step 5: estimating evaporated volume... 📊")
    # Estimate the evaporated volume from temperature, vacuum, time, and solvent type
    evaporation_volume = 0.0
    if original_liquid_volume > 0:
        # Base evaporation rate (mL/min)
        base_evap_rate = 0.5
        # Temperature factor (higher temperature evaporates faster)
        temp_factor = 1.0 + (temp - 25.0) / 100.0
        # Vacuum factor (stronger vacuum evaporates faster)
        vacuum_factor = 1.0 + (1.0 - pressure) * 2.0
        # Solvent factor
        solvent_factor = 1.0
        if solvent:
            solvent_lower = solvent.lower()
            if any(s in solvent_lower for s in ['water', 'h2o']):
                solvent_factor = 0.8  # water evaporates slowly
            elif any(s in solvent_lower for s in ['ethanol', 'methanol', 'acetone']):
                solvent_factor = 1.5  # volatile solvents evaporate quickly
            elif any(s in solvent_lower for s in ['dmso', 'dmi']):
                solvent_factor = 0.3  # high-boiling solvents evaporate slowly
        # Total evaporated volume
        total_evap_rate = base_evap_rate * temp_factor * vacuum_factor * solvent_factor
        evaporation_volume = min(
            original_liquid_volume * 0.95,  # at most 95% evaporates
            total_evap_rate * (final_time / 60.0)  # time-dependent amount
        )
        debug_print("📊 Evaporation estimate:")
        debug_print(f"  - base evaporation rate: {base_evap_rate} mL/min")
        debug_print(f"  - temperature factor: {temp_factor:.2f} (at {temp}°C)")
        debug_print(f"  - vacuum factor: {vacuum_factor:.2f} (at {pressure} bar)")
        debug_print(f"  - solvent factor: {solvent_factor:.2f} ({solvent or 'generic'})")
        debug_print(f"  - total evaporation rate: {total_evap_rate:.2f} mL/min")
        debug_print(f"  - estimated evaporated volume: {evaporation_volume:.2f}mL ({evaporation_volume/original_liquid_volume*100:.1f}%)")
    # === Step 6: build the action sequence ===
    debug_print("📍 Step 6: building the action sequence... 🎬")
    action_sequence = []
    # 1. Initial stabilization wait
    debug_print("  🔄 Action 1: adding initial stabilization wait... ⏳")
    action_sequence.append({
        "action_name": "wait",
        "action_kwargs": {"time": 10}
    })
    debug_print("  ✅ Initial wait action added ⏳✨")
    # 2. The evaporation action itself
    debug_print("  🌪️ Action 2: evaporation operation...")
    debug_print(f"    🔧 device: {rotavap_device}")
    debug_print(f"    🥽 vessel: {target_vessel}")
    debug_print(f"    💨 pressure: {pressure} bar")
    debug_print(f"    🌡️ temperature: {temp}°C")
    debug_print(f"    ⏰ time: {final_time}s ({final_time/60:.1f} min)")
    debug_print(f"    🌪️ rotation speed: {stir_speed} RPM")
    evaporate_action = {
        "device_id": rotavap_device,
        "action_name": "evaporate",
@@ -221,17 +332,20 @@ def generate_evaporate_protocol(
            "vessel": {"id": target_vessel},
            "pressure": float(pressure),
            "temp": float(temp),
            "time": float(final_time),  # force float type
            "stir_speed": float(stir_speed),
            "solvent": str(solvent)
        }
    }
    action_sequence.append(evaporate_action)
    debug_print("  ✅ Evaporation action added 🌪️✨")
    # Volume change during evaporation
    debug_print("  🔧 Updating vessel volume - evaporation...")
    if evaporation_volume > 0:
        new_volume = max(0.0, original_liquid_volume - evaporation_volume)
        # Update the volume in the vessel dict
        if "data" in vessel and "liquid_volume" in vessel["data"]:
            current_volume = vessel["data"]["liquid_volume"]
            if isinstance(current_volume, list):
@@ -243,14 +357,15 @@ def generate_evaporate_protocol(
                vessel["data"]["liquid_volume"] = new_volume
        else:
            vessel["data"]["liquid_volume"] = new_volume
        # Also update the vessel data stored in the graph
        if vessel_id in G.nodes():
            if 'data' not in G.nodes[vessel_id]:
                G.nodes[vessel_id]['data'] = {}
            vessel_node_data = G.nodes[vessel_id]['data']
            current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
            if isinstance(current_node_volume, list):
                if len(current_node_volume) > 0:
                    G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
@@ -258,16 +373,18 @@ def generate_evaporate_protocol(
                    G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
            else:
                G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
        debug_print(f"  📊 Evaporation volume change: {original_liquid_volume:.2f}mL → {new_volume:.2f}mL (-{evaporation_volume:.2f}mL)")
    # 3. Wait after evaporation
    debug_print("  🔄 Action 3: adding post-evaporation wait... ⏳")
    action_sequence.append({
        "action_name": "wait",
        "action_kwargs": {"time": 10}
    })
    debug_print("  ✅ Post-evaporation wait action added ⏳✨")
    # Report the final state after evaporation
    final_liquid_volume = 0.0
    if "data" in vessel and "liquid_volume" in vessel["data"]:
        current_volume = vessel["data"]["liquid_volume"]
@@ -275,7 +392,19 @@ def generate_evaporate_protocol(
            final_liquid_volume = current_volume[0]
        elif isinstance(current_volume, (int, float)):
            final_liquid_volume = current_volume
    # === Summary ===
    debug_print("🎊" * 20)
    debug_print("🎉 Evaporation protocol generation complete! ✨")
    debug_print(f"📊 Total actions: {len(action_sequence)} 📝")
    debug_print(f"🌪️ Rotary evaporator: {rotavap_device} 🔧")
    debug_print(f"🥽 Target vessel: {target_vessel} 🧪")
    debug_print(f"⚙️ Evaporation parameters: {pressure} bar 💨, {temp}°C 🌡️, {final_time}s ⏰, {stir_speed} RPM 🌪️")
    debug_print(f"⏱️ Estimated total time: {(final_time + 20)/60:.1f} min ⌛")
    debug_print("📊 Volume change:")
    debug_print(f"  - before evaporation: {original_liquid_volume:.2f}mL")
    debug_print(f"  - after evaporation: {final_liquid_volume:.2f}mL")
    debug_print(f"  - evaporated: {evaporation_volume:.2f}mL ({evaporation_volume/max(original_liquid_volume, 0.01)*100:.1f}%)")
    debug_print("🎊" * 20)
    return action_sequence
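The volume estimate in step 5 above multiplies a base rate by temperature, vacuum, and solvent factors, then caps the result at 95% of the starting volume. A self-contained sketch of that arithmetic (the constants are the ones shown in the function; the worked numbers below are just one sample input):

```python
def estimate_evaporated_volume(volume_ml, temp_c, pressure_bar, time_s, solvent_factor=1.0):
    base_rate = 0.5  # base evaporation rate, mL/min
    temp_factor = 1.0 + (temp_c - 25.0) / 100.0       # hotter evaporates faster
    vacuum_factor = 1.0 + (1.0 - pressure_bar) * 2.0  # stronger vacuum evaporates faster
    rate = base_rate * temp_factor * vacuum_factor * solvent_factor
    # Cap at 95% of the starting volume.
    return min(volume_ml * 0.95, rate * (time_s / 60.0))

# 50 mL of a generic solvent at 60 °C and 0.1 bar for 180 s:
# rate = 0.5 * 1.35 * 2.8 = 1.89 mL/min, so ~5.67 mL over 3 minutes.
print(round(estimate_evaporated_volume(50.0, 60.0, 0.1, 180.0), 2))  # 5.67
```

The cap is what keeps the tracked `liquid_volume` from going negative even for long runs of volatile solvents.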

View File

@@ -2,64 +2,87 @@ from typing import List, Dict, Any, Optional
import networkx as nx
import logging
from .utils.vessel_parser import get_vessel
from .utils.logger_util import debug_print
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
def debug_print(message):
"""Debug output."""
logger.info(f"[FILTER] {message}")
def find_filter_device(G: nx.DiGraph) -> str:
"""Find the filter device."""
debug_print("🔍 Looking for a filter device... 🌊")
# Search for filter devices
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'filter' in node_class.lower() or 'filter' in node.lower():
debug_print(f"Found filter device: {node}")
debug_print(f"🎉 Found filter device: {node}")
return node
# If nothing was found, try a list of likely filter names
debug_print("🔎 Searching predefined filter names... 📋")
possible_names = ["filter", "filter_1", "virtual_filter", "filtration_unit"]
for name in possible_names:
if name in G.nodes():
debug_print(f"Found filter device: {name}")
debug_print(f"🎉 Found filter device: {name}")
return name
debug_print("😭 No filter device found 💔")
raise ValueError("No filter device found")
def validate_vessel(G: nx.DiGraph, vessel: str, vessel_type: str = "vessel") -> None:
"""Validate that the vessel exists."""
debug_print(f"🔍 Validating {vessel_type}: '{vessel}' 🧪")
if not vessel:
debug_print(f"{vessel_type} must not be empty! 😱")
raise ValueError(f"{vessel_type} must not be empty")
if vessel not in G.nodes():
debug_print(f"{vessel_type} '{vessel}' does not exist in the system! 😞")
raise ValueError(f"{vessel_type} '{vessel}' does not exist in the system")
debug_print(f"{vessel_type} '{vessel}' validated 🎯")
def generate_filter_protocol(
G: nx.DiGraph,
vessel: dict,
vessel: dict, # 🔧 Changed: dict instead of plain string
filtrate_vessel: dict = {"id": "waste"},
**kwargs
) -> List[Dict[str, Any]]:
"""
Generate the action sequence for a filtration operation, with volume bookkeeping.
Args:
G: device graph
vessel: filtration vessel dict (required) - holds the mixture to be filtered
filtrate_vessel: filtrate vessel (optional) - if given, the filtrate is collected there
**kwargs: extra parameters (for compatibility)
Returns:
List[Dict[str, Any]]: action sequence for the filtration operation
"""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, vessel_data = get_vessel(vessel)
filtrate_vessel_id, filtrate_vessel_data = get_vessel(filtrate_vessel)
debug_print(f"Generating filter protocol: vessel={vessel_id}, filtrate_vessel={filtrate_vessel_id}")
debug_print("🌊" * 20)
debug_print("🚀 Generating filter protocol (with volume bookkeeping) ✨")
debug_print(f"📝 Input parameters:")
debug_print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 🧪 filtrate_vessel: {filtrate_vessel}")
debug_print(f" ⚙️ other kwargs: {kwargs}")
debug_print("🌊" * 20)
action_sequence = []
# Record the vessel state before filtration
# 🔧 New: record the vessel state before filtration
debug_print("🔍 Recording pre-filtration vessel state...")
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -67,45 +90,79 @@ def generate_filter_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
debug_print(f"📊 Liquid volume before filtration: {original_liquid_volume:.2f}mL")
# === Parameter validation ===
validate_vessel(G, vessel_id, "filter vessel")
debug_print("📍 Step 1: parameter validation... 🔧")
# Validate required parameters
debug_print(" 🔍 Validating required parameters...")
validate_vessel(G, vessel_id, "filter vessel") # 🔧 use vessel_id
debug_print(" ✅ Required parameters validated 🎯")
# Validate optional parameters
debug_print(" 🔍 Validating optional parameters...")
if filtrate_vessel:
validate_vessel(G, filtrate_vessel_id, "filtrate vessel")
debug_print(" 🌊 Mode: filter and collect the filtrate 💧")
else:
debug_print(" 🧱 Mode: filter and keep the solid 🔬")
debug_print(" ✅ Optional parameters validated 🎯")
# === Device lookup ===
debug_print("📍 Step 2: locating devices... 🔍")
try:
debug_print(" 🔎 Searching for a filter device...")
filter_device = find_filter_device(G)
debug_print(f"Using filter device: {filter_device}")
debug_print(f" 🎉 Using filter device: {filter_device} 🌊✨")
except Exception as e:
debug_print(f" ❌ Device lookup failed: {str(e)} 😭")
raise ValueError(f"Device lookup failed: {str(e)}")
# Estimate how the volume splits during filtration
solid_ratio = 0.1
liquid_ratio = 0.9
volume_loss_ratio = 0.05
# 🔧 New: filtration efficiency and volume-split estimation
debug_print("📍 Step 2.5: estimating the filtration volume split... 📊")
# Estimated separation ratios (empirical defaults)
solid_ratio = 0.1 # assume 10% solids (retained on the filter)
liquid_ratio = 0.9 # assume 90% liquid (passes through the filter)
volume_loss_ratio = 0.05 # assume 5% volume loss (residue on the filter etc.)
# Refine the estimate with filtration parameters from kwargs
if "solid_content" in kwargs:
try:
solid_ratio = float(kwargs["solid_content"])
liquid_ratio = 1.0 - solid_ratio
debug_print(f"📋 Using the given solid content: {solid_ratio*100:.1f}%")
except:
pass
debug_print("⚠️ Invalid solid_content parameter, falling back to the default")
if original_liquid_volume > 0:
expected_filtrate_volume = original_liquid_volume * liquid_ratio * (1.0 - volume_loss_ratio)
expected_solid_volume = original_liquid_volume * solid_ratio
volume_loss = original_liquid_volume * volume_loss_ratio
debug_print(f"📊 Estimated filtration volume split:")
debug_print(f" - original volume: {original_liquid_volume:.2f}mL")
debug_print(f" - expected filtrate volume: {expected_filtrate_volume:.2f}mL ({liquid_ratio*100:.1f}%)")
debug_print(f" - expected solid volume: {expected_solid_volume:.2f}mL ({solid_ratio*100:.1f}%)")
debug_print(f" - expected volume loss: {volume_loss:.2f}mL ({volume_loss_ratio*100:.1f}%)")
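The split above is simple proportional arithmetic. A self-contained sketch using the empirical defaults from the hunk (note the estimate is deliberately rough, so the three parts need not sum exactly to the input):

```python
def estimate_filtration_split(volume_ml: float,
                              solid_ratio: float = 0.1,
                              loss_ratio: float = 0.05):
    """Split a starting volume into expected filtrate, solid, and loss."""
    liquid_ratio = 1.0 - solid_ratio
    filtrate = volume_ml * liquid_ratio * (1.0 - loss_ratio)  # liquid that passes
    solid = volume_ml * solid_ratio                            # retained on filter
    loss = volume_ml * loss_ratio                              # residue / dead volume
    return filtrate, solid, loss

filtrate, solid, loss = estimate_filtration_split(100.0)
print(filtrate, solid, loss)  # 85.5 10.0 5.0
```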
# === Transfer to the filter (if needed) ===
if vessel_id != filter_device:
debug_print("📍 Step 3: transferring to the filter... 🚚")
if vessel_id != filter_device: # 🔧 use vessel_id
debug_print(f" 🚛 Transfer needed: {vessel_id} → {filter_device} 📦")
try:
debug_print(" 🔄 Executing the transfer...")
# Use the pump protocol to move the liquid to the filter
transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel={"id": vessel_id},
from_vessel={"id": vessel_id}, # 🔧 use vessel_id
to_vessel={"id": filter_device},
volume=0.0,
volume=0.0, # transfer all liquid
amount="",
time=0.0,
viscous=False,
@@ -116,59 +173,88 @@ def generate_filter_protocol(
flowrate=2.0,
transfer_flowrate=2.0
)
if transfer_actions:
action_sequence.extend(transfer_actions)
debug_print(f"Added {len(transfer_actions)} transfer actions")
# Update vessel volumes
debug_print(f"Added {len(transfer_actions)} transfer actions 🚚✨")
# 🔧 New: update vessel volumes after the transfer
debug_print(" 🔧 Updating vessel volumes after the transfer...")
# The source vessel drops to 0: all liquid has been transferred
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
vessel["data"]["liquid_volume"] = [0.0] if len(current_volume) > 0 else [0.0]
else:
vessel["data"]["liquid_volume"] = 0.0
# Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
G.nodes[vessel_id]['data']['liquid_volume'] = 0.0
debug_print(f" 📊 Transfer complete, {vessel_id} volume set to 0.0mL")
else:
debug_print(" ⚠️ Transfer protocol returned an empty sequence 🤔")
except Exception as e:
debug_print(f"Transfer failed: {str(e)}, continuing")
debug_print(f"Transfer failed: {str(e)} 😞")
debug_print(" 🔄 Continuing; the filter may be directly connected 🤞")
else:
debug_print(" ✅ The filtration vessel is the filter itself; no transfer needed 🎯")
# === Run the filtration ===
debug_print("📍 Step 4: running the filtration... 🌊")
# Build the filter action parameters
debug_print(" ⚙️ Building filter parameters...")
filter_kwargs = {
"vessel": {"id": filter_device},
"filtrate_vessel": {"id": filtrate_vessel_id},
"vessel": {"id": filter_device}, # the filter device
"filtrate_vessel": {"id": filtrate_vessel_id}, # filtrate vessel (may be empty)
"stir": kwargs.get("stir", False),
"stir_speed": kwargs.get("stir_speed", 0.0),
"temp": kwargs.get("temp", 25.0),
"continue_heatchill": kwargs.get("continue_heatchill", False),
"volume": kwargs.get("volume", 0.0)
"volume": kwargs.get("volume", 0.0) # 0 means filter everything
}
debug_print(f" 📋 Filter parameters: {filter_kwargs}")
debug_print(" 🌊 Starting the filtration...")
# Filter action
filter_action = {
"device_id": filter_device,
"action_name": "filter",
"action_kwargs": filter_kwargs
}
action_sequence.append(filter_action)
debug_print(" ✅ Filter action added 🌊✨")
# Wait after filtration
debug_print(" ⏳ Adding a post-filtration wait...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 10.0}
})
debug_print(" ✅ Post-filtration wait action added ⏰✨")
# === Collect the filtrate (if needed) ===
debug_print("📍 Step 5: collecting the filtrate... 💧")
if filtrate_vessel_id and filtrate_vessel_id not in G.neighbors(filter_device):
debug_print(f" 🧪 Collecting filtrate: {filter_device} → {filtrate_vessel_id} 💧")
try:
debug_print(" 🔄 Executing the collection...")
# Use the pump protocol to collect the filtrate
collect_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=filter_device,
to_vessel=filtrate_vessel,
volume=0.0,
volume=0.0, # collect all filtrate
amount="",
time=0.0,
viscous=False,
@@ -179,15 +265,19 @@ def generate_filter_protocol(
flowrate=2.0,
transfer_flowrate=2.0
)
if collect_actions:
action_sequence.extend(collect_actions)
# Update the filtrate vessel volume
debug_print(f" ✅ Added {len(collect_actions)} collection actions 🧪✨")
# 🔧 New: update volumes after collecting the filtrate
debug_print(" 🔧 Updating the filtrate vessel volume...")
# Update filtrate_vessel's volume in the graph (if it is a node)
if filtrate_vessel_id in G.nodes():
if 'data' not in G.nodes[filtrate_vessel_id]:
G.nodes[filtrate_vessel_id]['data'] = {}
current_filtrate_volume = G.nodes[filtrate_vessel_id]['data'].get('liquid_volume', 0.0)
if isinstance(current_filtrate_volume, list):
if len(current_filtrate_volume) > 0:
@@ -196,37 +286,58 @@ def generate_filter_protocol(
G.nodes[filtrate_vessel_id]['data']['liquid_volume'] = [expected_filtrate_volume]
else:
G.nodes[filtrate_vessel_id]['data']['liquid_volume'] = current_filtrate_volume + expected_filtrate_volume
debug_print(f" 📊 Filtrate vessel {filtrate_vessel_id} volume increased by {expected_filtrate_volume:.2f}mL")
else:
debug_print(" ⚠️ Collection protocol returned an empty sequence 🤔")
except Exception as e:
debug_print(f"Filtrate collection failed: {str(e)}, continuing")
# Update the vessel state after filtration
debug_print(f"Filtrate collection failed: {str(e)} 😞")
debug_print(" 🔄 Continuing; the filtrate may flow directly into the target vessel 🤞")
else:
debug_print(" 🧱 No filtrate vessel given; the solid stays on the filter 🔬")
# 🔧 New: update the vessel state after filtration
debug_print("📍 Step 5.5: post-filtration state update... 📊")
if vessel_id == filter_device:
# If the filtration vessel is the filter itself, update its volume state
if original_liquid_volume > 0:
if filtrate_vessel:
# Filtrate-collection mode: mostly solids remain on the filter
remaining_volume = expected_solid_volume
debug_print(f" 🧱 Solid retained on the filter: {remaining_volume:.2f}mL")
else:
# Solid-retention mode: everything stays on the filter
remaining_volume = original_liquid_volume * (1.0 - volume_loss_ratio)
debug_print(f" 🔬 Everything retained on the filter: {remaining_volume:.2f}mL")
# Update the volume in the vessel dict
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
vessel["data"]["liquid_volume"] = [remaining_volume] if len(current_volume) > 0 else [remaining_volume]
else:
vessel["data"]["liquid_volume"] = remaining_volume
# Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
G.nodes[vessel_id]['data']['liquid_volume'] = remaining_volume
debug_print(f" 📊 Filter {vessel_id} volume updated to: {remaining_volume:.2f}mL")
# === Final wait ===
debug_print("📍 Step 6: final wait... ⏰")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 5.0}
})
# Final state
debug_print(" ✅ Final wait action added ⏰✨")
# 🔧 New: status report after the filtration completes
final_vessel_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -234,7 +345,22 @@ def generate_filter_protocol(
final_vessel_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
final_vessel_volume = current_volume
debug_print(f"Filter protocol generated: {len(action_sequence)} actions, vessel={vessel_id}, filter={filter_device}")
# === Summary ===
debug_print("🎊" * 20)
debug_print(f"🎉 Filter protocol generation complete! ✨")
debug_print(f"📊 Total actions: {len(action_sequence)} 📝")
debug_print(f"🥽 Filtration vessel: {vessel_id} 🧪")
debug_print(f"🌊 Filter device: {filter_device} 🔧")
debug_print(f"💧 Filtrate vessel: {filtrate_vessel_id or 'none (solid retained)'} 🧱")
debug_print(f"⏱️ Estimated total time: {(len(action_sequence) * 5):.0f} s ⌛")
if original_liquid_volume > 0:
debug_print(f"📊 Volume change statistics:")
debug_print(f" - volume before filtration: {original_liquid_volume:.2f}mL")
debug_print(f" - vessel volume after filtration: {final_vessel_volume:.2f}mL")
if filtrate_vessel:
debug_print(f" - expected filtrate volume: {expected_filtrate_volume:.2f}mL")
debug_print(f" - expected volume loss: {volume_loss:.2f}mL")
debug_print("🎊" * 20)
return action_sequence
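The collection hunk adds the expected filtrate volume onto whatever the target node already holds, where `liquid_volume` may be a scalar or a single-element list. The same accumulation logic in isolation, on a plain data dict rather than `G.nodes[...]['data']` (names are illustrative):

```python
def add_filtrate_volume(data: dict, added_ml: float) -> None:
    """Accumulate filtrate volume onto a node's data dict, keeping its shape."""
    current = data.get('liquid_volume', 0.0)
    if isinstance(current, list):
        if current:
            current[0] += added_ml       # mutate the stored list in place
        else:
            data['liquid_volume'] = [added_ml]
    else:
        data['liquid_volume'] = current + added_ml

node_data = {"liquid_volume": 10.0}
add_filtrate_volume(node_data, 85.5)
print(node_data)  # {'liquid_volume': 95.5}

list_data = {"liquid_volume": [10.0]}
add_filtrate_volume(list_data, 85.5)
print(list_data)  # {'liquid_volume': [95.5]}
```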

View File

@@ -1,24 +1,118 @@
from typing import List, Dict, Any, Union
import networkx as nx
from .utils.vessel_parser import get_vessel, find_connected_heatchill
from .utils.unit_parser import parse_time_input, parse_temperature_input
from .utils.logger_util import debug_print
import logging
import re
from .utils.vessel_parser import get_vessel
from .utils.unit_parser import parse_time_input
logger = logging.getLogger(__name__)
def debug_print(message):
"""Debug output."""
logger.info(f"[HEATCHILL] {message}")
def parse_temp_input(temp_input: Union[str, float], default_temp: float = 25.0) -> float:
"""
Parse a temperature input (unified helper).
Args:
temp_input: temperature input
default_temp: default temperature
Returns:
float: temperature in °C
"""
if not temp_input:
return default_temp
# 🔢 Numeric input
if isinstance(temp_input, (int, float)):
result = float(temp_input)
debug_print(f"🌡️ Numeric temperature: {temp_input} → {result}°C")
return result
# 📝 String input
temp_str = str(temp_input).lower().strip()
debug_print(f"🔍 Parsing temperature: '{temp_str}'")
# 🎯 Special temperatures
special_temps = {
"room temperature": 25.0, "reflux": 78.0, "ice bath": 0.0,
"boiling": 100.0, "hot": 60.0, "warm": 40.0, "cold": 10.0
}
if temp_str in special_temps:
result = special_temps[temp_str]
debug_print(f"🎯 Special temperature: '{temp_str}' → {result}°C")
return result
# 📐 Regex parsing (e.g. "256 °C")
temp_pattern = r'(\d+(?:\.\d+)?)\s*°?[cf]?'
match = re.search(temp_pattern, temp_str)
if match:
result = float(match.group(1))
debug_print(f"✅ Temperature parsed: '{temp_str}' → {result}°C")
return result
debug_print(f"⚠️ Could not parse temperature: '{temp_str}', using default: {default_temp}°C")
return default_temp
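The regex above accepts an optional decimal part, optional whitespace, and an optional degree sign and unit letter. A quick check of the same pattern in isolation (note it is permissive: any digit run in the string matches):

```python
import re

TEMP_PATTERN = r'(\d+(?:\.\d+)?)\s*°?[cf]?'

def parse_temp(text: str, default: float = 25.0) -> float:
    """Extract the first numeric temperature from a string, else the default."""
    match = re.search(TEMP_PATTERN, text.lower().strip())
    return float(match.group(1)) if match else default

print(parse_temp("45 °C"))    # 45.0
print(parse_temp("45°C"))     # 45.0
print(parse_temp("reflux"))   # 25.0 (no digits -> default)
```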
def find_connected_heatchill(G: nx.DiGraph, vessel: str) -> str:
"""Find a heat/chill device connected to the given vessel."""
debug_print(f"🔍 Looking for a heating device, target vessel: {vessel}")
# 🔧 Collect all heat/chill devices
heatchill_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'heatchill' in node_class.lower() or 'virtual_heatchill' in node_class:
heatchill_nodes.append(node)
debug_print(f"🎉 Found heat/chill device: {node}")
# 🔗 Check connectivity
if vessel and heatchill_nodes:
for heatchill in heatchill_nodes:
if G.has_edge(heatchill, vessel) or G.has_edge(vessel, heatchill):
debug_print(f"✅ Heat/chill device '{heatchill}' is connected to vessel '{vessel}'")
return heatchill
# 🎯 Fall back to the first available device
if heatchill_nodes:
selected = heatchill_nodes[0]
debug_print(f"🔧 Using the first heat/chill device: {selected}")
return selected
# 🆘 Default device
debug_print("⚠️ No heat/chill device found, using the default device")
return "heatchill_1"
def validate_and_fix_params(temp: float, time: float, stir_speed: float) -> tuple:
"""Validate and fix parameters."""
# 🌡️ Temperature range check
if temp < -50.0 or temp > 300.0:
debug_print(f"⚠️ Temperature {temp}°C out of range, corrected to 25°C")
temp = 25.0
else:
debug_print(f"✅ Temperature {temp}°C is within the normal range")
# ⏰ Time check
if time < 0:
debug_print(f"⚠️ Time {time}s invalid, corrected to 300s")
time = 300.0
else:
debug_print(f"✅ Time {time}s ({time/60:.1f} min) is valid")
# 🌪️ Stir-speed check
if stir_speed < 0 or stir_speed > 1500.0:
debug_print(f"⚠️ Stir speed {stir_speed} RPM out of range, corrected to 300 RPM")
stir_speed = 300.0
else:
debug_print(f"✅ Stir speed {stir_speed} RPM is within the normal range")
return temp, time, stir_speed
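A compact, self-contained version of the same defaulting logic (the ranges and fallback values are those used in the hunk):

```python
def validate_and_fix(temp: float, time_s: float, stir_rpm: float):
    """Replace out-of-range values with safe defaults, as the hunk does."""
    if not -50.0 <= temp <= 300.0:
        temp = 25.0        # default temperature, °C
    if time_s < 0:
        time_s = 300.0     # default hold time, seconds
    if not 0 <= stir_rpm <= 1500.0:
        stir_rpm = 300.0   # default stir speed, RPM
    return temp, time_s, stir_rpm

print(validate_and_fix(500.0, -5.0, 2000.0))  # (25.0, 300.0, 300.0)
print(validate_and_fix(25.0, 60.0, 300.0))    # (25.0, 60.0, 300.0)
```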
def generate_heat_chill_protocol(
@@ -37,7 +131,7 @@ def generate_heat_chill_protocol(
) -> List[Dict[str, Any]]:
"""
Generate the action sequence for a heat/chill operation - supports vessel dicts
Args:
G: device graph
vessel: vessel dict (passed in from XDL)
@@ -51,58 +145,82 @@ def generate_heat_chill_protocol(
stir_speed: stir speed (RPM)
purpose: description of the operation's purpose
**kwargs: extra parameters (for compatibility)
Returns:
List[Dict[str, Any]]: action sequence for the heat/chill operation
"""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, vessel_data = get_vessel(vessel)
debug_print(f"Generating heat/chill protocol: vessel={vessel_id}, temp={temp}°C, "
f"time={time}, stir={stir} ({stir_speed} RPM), purpose='{purpose}'")
# Parameter validation
if not vessel_id:
debug_print("🌡️" * 20)
debug_print("🚀 Generating heat/chill protocol (supports vessel dicts)")
debug_print(f"📝 Input parameters:")
debug_print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 🌡️ temp: {temp}°C")
debug_print(f" ⏰ time: {time}")
debug_print(f" 🎯 temp_spec: {temp_spec}")
debug_print(f" ⏱️ time_spec: {time_spec}")
debug_print(f" 🌪️ stir: {stir} ({stir_speed} RPM)")
debug_print(f" 🎭 purpose: '{purpose}'")
debug_print("🌡️" * 20)
# 📋 Parameter validation
debug_print("📍 Step 1: parameter validation... 🔧")
if not vessel_id: # 🔧 use vessel_id
debug_print("❌ vessel must not be empty! 😱")
raise ValueError("vessel must not be empty")
if vessel_id not in G.nodes():
if vessel_id not in G.nodes(): # 🔧 use vessel_id
debug_print(f"❌ Vessel '{vessel_id}' does not exist in the system! 😞")
raise ValueError(f"Vessel '{vessel_id}' does not exist in the system")
# Parse parameters
# Temperature: prefer temp_spec when given
final_temp = parse_temperature_input(temp_spec, temp) if temp_spec else temp
debug_print("✅ Basic parameter validation passed 🎯")
# 🔄 Parameter parsing
debug_print("📍 Step 2: parameter parsing... ⚡")
# Temperature: prefer temp_spec when given
final_temp = parse_temp_input(temp_spec, temp) if temp_spec else temp
# Time: prefer time_spec when given
final_time = parse_time_input(time_spec) if time_spec else parse_time_input(time)
# Fix out-of-range parameters
final_temp, final_time, stir_speed = validate_and_fix_params(final_temp, final_time, stir_speed)
debug_print(f"Final parameters: temp={final_temp}°C, time={final_time}s, stir_speed={stir_speed} RPM")
# Device lookup
debug_print(f"🎯 Final parameters: temp={final_temp}°C, time={final_time}s, stir_speed={stir_speed} RPM")
# 🔍 Device lookup
debug_print("📍 Step 3: locating the heat/chill device... 🔍")
try:
heatchill_id = find_connected_heatchill(G, vessel_id)
debug_print(f"Using heat/chill device: {heatchill_id}")
heatchill_id = find_connected_heatchill(G, vessel_id) # 🔧 use vessel_id
debug_print(f"🎉 Using heat/chill device: {heatchill_id}")
except Exception as e:
debug_print(f"❌ Device lookup failed: {str(e)} 😭")
raise ValueError(f"Could not find a heat/chill device: {str(e)}")
# Generate actions
# Simulation run-time optimization
# 🚀 Generate actions
debug_print("📍 Step 4: generating the heat/chill action... 🔥")
# 🕐 Simulation run-time optimization
debug_print(" ⏱️ Checking the simulation run-time limit...")
original_time = final_time
simulation_time_limit = 100.0 # simulation run-time limit: 100 seconds
if final_time > simulation_time_limit:
final_time = simulation_time_limit
debug_print(f"Simulation optimization: {original_time}s → {final_time}s (capped at {simulation_time_limit}s)")
debug_print(f" 🎮 Simulation optimization: {original_time}s → {final_time}s (capped at {simulation_time_limit}s)")
debug_print(f" 📊 Time shortened: {original_time/60:.1f} min → {final_time/60:.1f} min 🚀")
else:
debug_print(f" ✅ Time within the limit: {final_time}s ({final_time/60:.1f} min), kept unchanged 🎯")
action_sequence = []
heatchill_action = {
"device_id": heatchill_id,
"action_name": "heat_chill",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": {"id": vessel},
"temp": float(final_temp),
"time": float(final_time),
"stir": bool(stir),
@@ -111,10 +229,21 @@ def generate_heat_chill_protocol(
}
}
action_sequence.append(heatchill_action)
debug_print(f"Heat/chill protocol generated: {len(action_sequence)} actions, "
f"vessel={vessel_id}, temp={final_temp}°C, time={final_time}s")
debug_print("✅ Heat/chill action added 🔥✨")
# Report the time adjustment, if any
if original_time != final_time:
debug_print(f" 🎭 Simulation note: planned {original_time/60:.1f} min, simulated {final_time/60:.1f} min ⚡")
# 🎊 Summary
debug_print("🎊" * 20)
debug_print(f"🎉 Heat/chill protocol generation complete! ✨")
debug_print(f"📊 Total actions: {len(action_sequence)}")
debug_print(f"🥽 Vessel: {vessel_id}")
debug_print(f"🌡️ Target temperature: {final_temp}°C")
debug_print(f"⏰ Heating time: {final_time}s ({final_time/60:.1f} min)")
debug_print("🎊" * 20)
return action_sequence
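The simulation cap in the hunk above is a plain clamp. A sketch of the same logic (the 100 s limit is the value used in this protocol; hydrogenation below uses 60 s):

```python
def cap_simulated_time(requested_s: float, limit_s: float = 100.0):
    """Cap a requested run time for simulation, reporting whether it changed."""
    capped = min(requested_s, limit_s)
    return capped, capped != requested_s

print(cap_simulated_time(3600.0))  # (100.0, True)  - one hour capped to 100 s
print(cap_simulated_time(30.0))    # (30.0, False)  - already within the limit
```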
def generate_heat_chill_to_temp_protocol(
@@ -126,7 +255,7 @@ def generate_heat_chill_to_temp_protocol(
) -> List[Dict[str, Any]]:
"""Generate a heat-to-temperature protocol (simplified)."""
vessel_id, _ = get_vessel(vessel)
debug_print(f"Generating heat-to-temperature protocol: {vessel_id} → {temp}°C")
debug_print(f"🌡️ Generating heat-to-temperature protocol: {vessel_id} → {temp}°C")
return generate_heat_chill_protocol(G, vessel, temp, time, **kwargs)
def generate_heat_chill_start_protocol(
@@ -137,19 +266,21 @@ def generate_heat_chill_start_protocol(
**kwargs
) -> List[Dict[str, Any]]:
"""Generate the action sequence for starting heating."""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, _ = get_vessel(vessel)
debug_print(f"Generating heat-start protocol: vessel={vessel_id}, temp={temp}°C")
debug_print("🔥 Generating heat-start protocol")
debug_print(f"🥽 vessel: {vessel} (ID: {vessel_id}), 🌡️ temp: {temp}°C")
# Basic validation
if not vessel_id or vessel_id not in G.nodes():
if not vessel_id or vessel_id not in G.nodes(): # 🔧 use vessel_id
debug_print("❌ Vessel validation failed!")
raise ValueError("invalid vessel parameter")
# Device lookup
heatchill_id = find_connected_heatchill(G, vessel_id)
heatchill_id = find_connected_heatchill(G, vessel_id) # 🔧 use vessel_id
# Generate actions
action_sequence = [{
"device_id": heatchill_id,
@@ -160,8 +291,8 @@ def generate_heat_chill_start_protocol(
"vessel": {"id": vessel_id},
}
}]
debug_print(f"Heat-start protocol generated")
debug_print(f"Heat-start protocol generated 🎯")
return action_sequence
def generate_heat_chill_stop_protocol(
@@ -170,19 +301,21 @@ def generate_heat_chill_stop_protocol(
**kwargs
) -> List[Dict[str, Any]]:
"""Generate the action sequence for stopping heating."""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, _ = get_vessel(vessel)
debug_print(f"Generating heat-stop protocol: vessel={vessel_id}")
debug_print("🛑 Generating heat-stop protocol")
debug_print(f"🥽 vessel: {vessel} (ID: {vessel_id})")
# Basic validation
if not vessel_id or vessel_id not in G.nodes():
if not vessel_id or vessel_id not in G.nodes(): # 🔧 use vessel_id
debug_print("❌ Vessel validation failed!")
raise ValueError("invalid vessel parameter")
# Device lookup
heatchill_id = find_connected_heatchill(G, vessel_id)
heatchill_id = find_connected_heatchill(G, vessel_id) # 🔧 use vessel_id
# Generate actions
action_sequence = [{
"device_id": heatchill_id,
@@ -190,6 +323,6 @@ def generate_heat_chill_stop_protocol(
"action_kwargs": {
}
}]
debug_print(f"Heat-stop protocol generated")
debug_print(f"Heat-stop protocol generated 🎯")
return action_sequence

View File

@@ -1,50 +1,105 @@
import networkx as nx
from typing import List, Dict, Any, Optional
from .utils.vessel_parser import get_vessel
from .utils.logger_util import debug_print
from .utils.unit_parser import parse_temperature_input, parse_time_input
def parse_temperature(temp_str: str) -> float:
"""
Parse a temperature string in several formats.
Args:
temp_str: temperature string (e.g. "45 °C", "45°C", "45")
Returns:
float: temperature in °C
"""
try:
# Strip common temperature units and symbols
temp_clean = temp_str.replace("°C", "").replace("°", "").replace("C", "").strip()
return float(temp_clean)
except ValueError:
print(f"HYDROGENATE: could not parse temperature '{temp_str}', using default 25°C")
return 25.0
def parse_time(time_str: str) -> float:
"""
Parse a time string in several formats.
Args:
time_str: time string (e.g. "2 h", "120 min", "7200 s")
Returns:
float: time in seconds
"""
try:
time_clean = time_str.lower().strip()
# Hours
if "h" in time_clean:
hours = float(time_clean.replace("h", "").strip())
return hours * 3600.0
# Minutes
if "min" in time_clean:
minutes = float(time_clean.replace("min", "").strip())
return minutes * 60.0
# Seconds
if "s" in time_clean:
seconds = float(time_clean.replace("s", "").strip())
return seconds
# Default: treat a bare number as hours
return float(time_clean) * 3600.0
except ValueError:
print(f"HYDROGENATE: could not parse time '{time_str}', using default 2 hours")
return 7200.0 # 2 hours
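The unit checks above are order-sensitive: any string containing "h" is treated as hours before "min" or "s" are considered, and a bare number defaults to hours. A quick check of the same logic:

```python
def parse_time_s(text: str) -> float:
    """Parse '2 h', '120 min', '7200 s', or a bare number (hours) into seconds."""
    t = text.lower().strip()
    try:
        if "h" in t:
            return float(t.replace("h", "").strip()) * 3600.0
        if "min" in t:
            return float(t.replace("min", "").strip()) * 60.0
        if "s" in t:
            return float(t.replace("s", "").strip())
        return float(t) * 3600.0   # bare numbers default to hours
    except ValueError:
        return 7200.0              # fallback: 2 hours

print(parse_time_s("2 h"))      # 7200.0
print(parse_time_s("120 min"))  # 7200.0
print(parse_time_s("90 s"))     # 90.0
print(parse_time_s("invalid"))  # 7200.0 (fallback)
```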
def find_associated_solenoid_valve(G: nx.DiGraph, device_id: str) -> Optional[str]:
"""Find the solenoid valve associated with the given device."""
solenoid_valves = [
node for node in G.nodes()
if ('solenoid' in (G.nodes[node].get('class') or '').lower()
or 'solenoid_valve' in node)
]
# Look for a solenoid valve that is directly connected in the graph
for solenoid in solenoid_valves:
if G.has_edge(device_id, solenoid) or G.has_edge(solenoid, device_id):
return solenoid
# Fall back to matching by naming convention
device_type = ""
if 'gas' in device_id.lower():
device_type = "gas"
elif 'h2' in device_id.lower() or 'hydrogen' in device_id.lower():
device_type = "gas"
if device_type:
for solenoid in solenoid_valves:
if device_type in solenoid.lower():
return solenoid
return None
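The lookup above combines a direct-edge check with a name-based fallback. A self-contained sketch of the same two-stage search on a toy networkx graph (the node names are illustrative):

```python
import networkx as nx

def find_valve(G: nx.DiGraph, device: str):
    """Prefer a solenoid valve with a direct edge; else match by name."""
    valves = [n for n in G.nodes()
              if 'solenoid' in (G.nodes[n].get('class') or '').lower()
              or 'solenoid_valve' in n]
    for v in valves:                 # 1) a direct connection wins
        if G.has_edge(device, v) or G.has_edge(v, device):
            return v
    if 'gas' in device.lower():      # 2) naming-convention fallback
        for v in valves:
            if 'gas' in v.lower():
                return v
    return None

G = nx.DiGraph()
G.add_node("gas_source_1", **{"class": "virtual_gas_source"})
G.add_node("solenoid_valve_gas", **{"class": "solenoid_valve"})
print(find_valve(G, "gas_source_1"))  # solenoid_valve_gas (via name fallback)
```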
def find_connected_device(G: nx.DiGraph, vessel: str, device_type: str) -> str:
"""
Find a device of the given type connected to the vessel.
Args:
G: network graph
vessel: vessel name
device_type: device type ('heater', 'stirrer', 'gas_source')
Returns:
str: device ID, or None if none is found
"""
print(f"HYDROGENATE: looking for a {device_type} connected to vessel '{vessel}'...")
# Define search keywords by device type
if device_type == 'heater':
keywords = ['heater', 'heat', 'heatchill']
@@ -57,38 +112,40 @@ def find_connected_device(G: nx.DiGraph, vessel: str, device_type: str) -> str:
device_class = 'virtual_gas_source'
else:
return None
# Collect candidate device nodes
device_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_name = node.lower()
node_class = node_data.get('class', '').lower()
# Match by name
if any(keyword in node_name for keyword in keywords):
device_nodes.append(node)
# Match by class
elif device_class in node_class:
device_nodes.append(node)
debug_print(f"Found {device_type} nodes: {device_nodes}")
print(f"HYDROGENATE: found {device_type} nodes: {device_nodes}")
# Check whether any device is connected to the target vessel
for device in device_nodes:
if G.has_edge(device, vessel) or G.has_edge(vessel, device):
debug_print(f"Found {device_type} connected to vessel '{vessel}': {device}")
print(f"HYDROGENATE: found {device_type} connected to vessel '{vessel}': {device}")
return device
# No direct connection: fall back to the closest device
for device in device_nodes:
try:
path = nx.shortest_path(G, source=device, target=vessel)
if len(path) <= 3: # path of ≤3 nodes, i.e. at most one intermediate node
debug_print(f"Found nearby {device_type}: {device}")
print(f"HYDROGENATE: found nearby {device_type}: {device}")
return device
except nx.NetworkXNoPath:
continue
debug_print(f"No {device_type} connected to vessel '{vessel}' found")
print(f"HYDROGENATE: no {device_type} connected to vessel '{vessel}' found")
return None
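The proximity fallback relies on `nx.shortest_path` returning the full node list including both endpoints, so `len(path) <= 3` allows at most one intermediate node. A minimal demonstration (node names are illustrative):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("heater_1", "valve_a"), ("valve_a", "reactor")])

path = nx.shortest_path(G, source="heater_1", target="reactor")
print(path)            # ['heater_1', 'valve_a', 'reactor']
print(len(path) <= 3)  # True: one intermediate node, close enough

# An unreachable pair raises NetworkXNoPath, which the hunk catches:
G.add_node("stirrer_1")
try:
    nx.shortest_path(G, source="stirrer_1", target="reactor")
except nx.NetworkXNoPath:
    print("no path")   # prints "no path"
```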
@@ -101,31 +158,36 @@ def generate_hydrogenate_protocol(
) -> List[Dict[str, Any]]:
"""
Generate a hydrogenation reaction protocol - supports vessel dicts.
Args:
G: directed graph whose nodes are vessels and devices
vessel: reaction vessel dict (passed in from XDL)
temp: reaction temperature (e.g. "45 °C")
time: reaction time (e.g. "2 h")
**kwargs: other optional parameters (unused)
Returns:
List[Dict[str, Any]]: action sequence
"""
# 🔧 Core change: extract the vessel ID from the dict
vessel_id, vessel_data = get_vessel(vessel)
action_sequence = []
# Parse parameters
temperature = parse_temperature_input(temp)
reaction_time = parse_time_input(time)
debug_print(f"Generating hydrogenation protocol: vessel={vessel_id}, "
f"temp={temperature}°C, time={reaction_time/3600:.1f}h")
# Record the vessel state before hydrogenation
temperature = parse_temperature(temp)
reaction_time = parse_time(time)
print("🧪" * 20)
print(f"HYDROGENATE: generating hydrogenation protocol (supports vessel dicts)")
print(f"📝 Input parameters:")
print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
print(f" 🌡️ reaction temperature: {temperature}°C")
print(f" ⏰ reaction time: {reaction_time/3600:.1f} hours")
print("🧪" * 20)
# 🔧 New: record the vessel state before hydrogenation (optional; volume is usually unchanged)
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -133,36 +195,47 @@ def generate_hydrogenate_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
print(f"📊 Liquid volume before hydrogenation: {original_liquid_volume:.2f}mL")
# 1. Verify the target vessel exists
if vessel_id not in G.nodes():
debug_print(f"⚠️ Vessel '{vessel_id}' does not exist in the system, skipping hydrogenation")
print("📍 Step 1: validating the target vessel...")
if vessel_id not in G.nodes(): # 🔧 use vessel_id
print(f"⚠️ HYDROGENATE: warning - vessel '{vessel_id}' does not exist in the system, skipping hydrogenation")
return action_sequence
print(f"✅ Vessel '{vessel_id}' validated")
# 2. Find connected devices
heater_id = find_connected_device(G, vessel_id, 'heater')
stirrer_id = find_connected_device(G, vessel_id, 'stirrer')
gas_source_id = find_connected_device(G, vessel_id, 'gas_source')
debug_print(f"Device configuration: heater={heater_id or 'not found'}, "
f"stirrer={stirrer_id or 'not found'}, gas={gas_source_id or 'not found'}")
print("📍 Step 2: finding connected devices...")
heater_id = find_connected_device(G, vessel_id, 'heater') # 🔧 use vessel_id
stirrer_id = find_connected_device(G, vessel_id, 'stirrer') # 🔧 use vessel_id
gas_source_id = find_connected_device(G, vessel_id, 'gas_source') # 🔧 use vessel_id
print(f"🔧 Device configuration:")
print(f" 🔥 heater: {heater_id or 'not found'}")
print(f" 🌪️ stirrer: {stirrer_id or 'not found'}")
print(f" 💨 gas source: {gas_source_id or 'not found'}")
# 3. Start the stirrer
print("📍 Step 3: starting the stirrer...")
if stirrer_id:
print(f"🌪️ Starting stirrer {stirrer_id}")
action_sequence.append({
"device_id": stirrer_id,
"action_name": "start_stir",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": vessel_id, # 🔧 use vessel_id
"stir_speed": 300.0,
"purpose": "hydrogenation: start stirring"
}
})
print("✅ Stirrer start action added")
else:
debug_print(f"⚠️ No stirrer found, continuing")
print(f"⚠️ HYDROGENATE: warning - no stirrer found, continuing")
# 4. Turn on the gas source (hydrogen)
print("📍 Step 4: starting the hydrogen source...")
if gas_source_id:
print(f"💨 Starting gas source {gas_source_id} (hydrogen)")
action_sequence.append({
"device_id": gas_source_id,
"action_name": "set_status",
@@ -170,10 +243,11 @@ def generate_hydrogenate_protocol(
"string": "ON"
}
})
# Find the associated solenoid valve
gas_solenoid = find_associated_solenoid_valve(G, gas_source_id)
if gas_solenoid:
print(f"🚪 Opening gas solenoid valve {gas_solenoid}")
action_sequence.append({
"device_id": gas_solenoid,
"action_name": "set_valve_position",
@@ -181,10 +255,12 @@ def generate_hydrogenate_protocol(
"command": "OPEN"
}
})
print("✅ Hydrogen source start actions added")
else:
debug_print(f"⚠️ No gas source found, continuing")
print(f"⚠️ HYDROGENATE: warning - no gas source found, continuing")
# 5. Wait for the gas atmosphere to stabilize
print("📍 Step 5: waiting for the gas atmosphere to stabilize...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -192,19 +268,22 @@ def generate_hydrogenate_protocol(
"description": "wait for the hydrogen atmosphere to stabilize"
}
})
print("✅ Gas-stabilization wait action added")
# 6. Start the heater
print("📍 Step 6: starting the heating step...")
if heater_id:
print(f"🔥 Starting heater {heater_id} to {temperature}°C")
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill_start",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": vessel_id, # 🔧 use vessel_id
"temp": temperature,
"purpose": f"hydrogenation: heat to {temperature}°C"
}
})
})
# Wait for the temperature to stabilize
action_sequence.append({
"action_name": "wait",
@@ -213,38 +292,52 @@ def generate_hydrogenate_protocol(
"description": f"wait for the temperature to stabilize at {temperature}°C"
}
})
# Simulation run-time optimization
# 🕐 Simulation run-time optimization
print(" ⏰ Checking the simulation run-time limit...")
original_reaction_time = reaction_time
simulation_time_limit = 60.0
simulation_time_limit = 60.0 # simulation run-time limit: 60 seconds
if reaction_time > simulation_time_limit:
reaction_time = simulation_time_limit
debug_print(f"Simulation optimization: {original_reaction_time}s → {reaction_time}s")
print(f" 🎮 Simulation optimization: {original_reaction_time}s → {reaction_time}s (capped at {simulation_time_limit}s)")
print(f" 📊 Time shortened: {original_reaction_time/3600:.2f} hours → {reaction_time/60:.1f} min")
else:
print(f" ✅ Time within the limit: {reaction_time}s ({reaction_time/60:.1f} min), kept unchanged")
# Hold the reaction temperature
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": vessel_id, # 🔧 use vessel_id
"temp": temperature,
"time": reaction_time,
"purpose": f"hydrogenation: hold {temperature}°C for {reaction_time/60:.1f} min" + (f" (simulated time)" if original_reaction_time != reaction_time else "")
}
})
# Report the time adjustment, if any
if original_reaction_time != reaction_time:
print(f" 🎭 Simulation note: planned {original_reaction_time/3600:.2f} hours, simulated {reaction_time/60:.1f} min")
print("✅ Heating action added")
else:
debug_print(f"⚠️ No heater found, running at room temperature")
# Room-temperature reactions also need the time cap
print(f"⚠️ HYDROGENATE: warning - no heater found, running at room temperature")
# 🕐 Room-temperature reactions also need the time cap
print(" ⏰ Checking the room-temperature simulation time limit...")
original_reaction_time = reaction_time
simulation_time_limit = 60.0
simulation_time_limit = 60.0 # simulation run-time limit: 60 seconds
if reaction_time > simulation_time_limit:
reaction_time = simulation_time_limit
debug_print(f"Room-temperature time optimization: {original_reaction_time}s → {reaction_time}s")
print(f" 🎮 Room-temperature time optimization: {original_reaction_time}s → {reaction_time}s")
print(f" 📊 Time shortened: {original_reaction_time/3600:.2f} hours → {reaction_time/60:.1f} min")
else:
print(f" ✅ Room-temperature reaction time within the limit: {reaction_time}s, kept unchanged")
# Room-temperature reaction: just wait for the reaction time
action_sequence.append({
"action_name": "wait",
@@ -253,19 +346,28 @@ def generate_hydrogenate_protocol(
"description": f"room-temperature hydrogenation for {reaction_time/60:.1f} min" + (f" (simulated time)" if original_reaction_time != reaction_time else "")
}
})
# Report the time adjustment, if any
if original_reaction_time != reaction_time:
print(f" 🎭 Room-temperature optimization note: planned {original_reaction_time/3600:.2f} hours, simulated {reaction_time/60:.1f} min")
print("✅ Room-temperature wait action added")
# 7. Stop the heater
print("📍 Step 7: stopping the heater...")
if heater_id:
action_sequence.append({
"device_id": heater_id,
"action_name": "heat_chill_stop",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": vessel_id, # 🔧 use vessel_id
"purpose": "hydrogenation complete, stop heating"
}
})
print("✅ Heater stop action added")
# 8. Wait for cooldown
print("📍 Step 8: waiting for cooldown...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -273,12 +375,15 @@ def generate_hydrogenate_protocol(
"description": "wait for the reaction mixture to cool"
}
})
print("✅ Cooldown wait action added")
# 9. Turn off the gas source
print("📍 Step 9: stopping the hydrogen source...")
if gas_source_id:
# Close the solenoid valve first
gas_solenoid = find_associated_solenoid_valve(G, gas_source_id)
if gas_solenoid:
print(f"🚪 Closing gas solenoid valve {gas_solenoid}")
action_sequence.append({
"device_id": gas_solenoid,
"action_name": "set_valve_position",
@@ -286,7 +391,7 @@ def generate_hydrogenate_protocol(
"command": "CLOSED"
}
})
# Then turn off the gas source itself
action_sequence.append({
"device_id": gas_source_id,
@@ -295,24 +400,59 @@ def generate_hydrogenate_protocol(
"string": "OFF"
}
})
print("✅ Hydrogen source stop actions added")
# 10. Stop stirring
print("📍 Step 10: stopping the stirrer...")
if stirrer_id:
action_sequence.append({
"device_id": stirrer_id,
"action_name": "stop_stir",
"action_kwargs": {
"vessel": {"id": vessel_id},
"vessel": vessel_id, # 🔧 use vessel_id
"purpose": "hydrogenation complete, stop stirring"
}
})
# State after hydrogenation (the reaction normally does not change the volume)
final_liquid_volume = original_liquid_volume
print("✅ Stirrer stop action added")
# 🔧 New: state after hydrogenation (the reaction normally does not change the volume)
final_liquid_volume = original_liquid_volume # volume is essentially unchanged
# Summary
debug_print(f"Hydrogenation protocol generated: {len(action_sequence)} actions, "
f"vessel={vessel_id}, temp={temperature}°C, time={reaction_time/60:.1f}min, "
f"volume={original_liquid_volume:.2f} → {final_liquid_volume:.2f}mL")
print("🎊" * 20)
print(f"🎉 Hydrogenation protocol generation complete! ✨")
print(f"📊 Total actions: {len(action_sequence)}")
print(f"🥽 Reaction vessel: {vessel_id}")
print(f"🌡️ Reaction temperature: {temperature}°C")
print(f"⏰ Reaction time: {reaction_time/60:.1f} min")
print(f"⏱️ Estimated total time: {(reaction_time + 450)/3600:.1f} hours")
print(f"📊 Volume state:")
print(f" - before reaction: {original_liquid_volume:.2f}mL")
print(f" - after reaction: {final_liquid_volume:.2f}mL (essentially unchanged by hydrogenation)")
print("🎊" * 20)
return action_sequence
# Test helper
def test_hydrogenate_protocol():
"""Test the hydrogenation protocol helpers."""
print("🧪 === HYDROGENATE PROTOCOL tests === ✨")
# Temperature parsing
test_temps = ["45 °C", "45°C", "45", "25 C", "invalid"]
for temp in test_temps:
parsed = parse_temperature(temp)
print(f"temperature '{temp}' -> {parsed}°C")
# Time parsing
test_times = ["2 h", "120 min", "7200 s", "2", "invalid"]
for time in test_times:
parsed = parse_time(time)
print(f"time '{time}' -> {parsed/3600:.1f} hours")
print("✅ Tests complete 🎉")
if __name__ == "__main__":
test_hydrogenate_protocol()

View File

@@ -2,116 +2,205 @@ import traceback
import numpy as np
import networkx as nx
import asyncio
import time as time_module # renamed to avoid clashing with time parameters
import time as time_module # 🔧 renamed to avoid clashing with time parameters
from typing import List, Dict, Any
import logging
import sys
from .utils.logger_util import debug_print
from .utils.vessel_parser import get_vessel
from .utils.resource_helper import get_resource_liquid_volume
from unilabos.compile.utils.vessel_parser import get_vessel
logger = logging.getLogger(__name__)
def is_integrated_pump(node_class: str, node_name: str = "") -> bool:
def debug_print(message):
"""强制输出调试信息"""
output = f"[TRANSFER] {message}"
logger.info(output)
def get_vessel_liquid_volume(G: nx.DiGraph, vessel: str) -> float:
"""
判断是否为泵阀一体设备
从容器节点的数据中获取液体体积
"""
class_lower = (node_class or "").lower()
name_lower = (node_name or "").lower()
debug_print(f"🔍 开始读取容器 '{vessel}' 的液体体积...")
if "pump" not in class_lower and "pump" not in name_lower:
return False
if vessel not in G.nodes():
logger.error(f"❌ 容器 '{vessel}' 不存在于系统图中")
debug_print(f" - 系统中的容器: {list(G.nodes())}")
return 0.0
integrated_markers = [
"valve",
"pump_valve",
"pumpvalve",
"integrated",
"transfer_pump",
]
vessel_data = G.nodes[vessel].get('data', {})
debug_print(f"📋 容器 '{vessel}' 的数据结构: {vessel_data}")
for marker in integrated_markers:
if marker in class_lower or marker in name_lower:
return True
total_volume = 0.0
return False
# Method 1: check the 'liquid' field (list format)
debug_print("🔍 方法1: 检查 'liquid' 字段...")
if 'liquid' in vessel_data:
liquids = vessel_data['liquid']
debug_print(f" - liquid 字段类型: {type(liquids)}")
debug_print(f" - liquid 字段内容: {liquids}")
if isinstance(liquids, list):
debug_print(f" - liquid 是列表,包含 {len(liquids)} 个元素")
for i, liquid in enumerate(liquids):
debug_print(f" 液体 {i + 1}: {liquid}")
if isinstance(liquid, dict):
volume_keys = ['liquid_volume', 'volume', 'amount', 'quantity']
for key in volume_keys:
if key in liquid:
try:
vol = float(liquid[key])
total_volume += vol
debug_print(f" ✅ 从 '{key}' 读取体积: {vol}mL")
break
except (ValueError, TypeError) as e:
logger.warning(f" ⚠️ 无法转换 '{key}': {liquid[key]} -> {str(e)}")
continue
else:
debug_print(f" - liquid 不是列表: {type(liquids)}")
else:
debug_print(" - 没有 'liquid' 字段")
# Method 2: check direct volume fields
debug_print("🔍 方法2: 检查直接体积字段...")
volume_keys = ['total_volume', 'volume', 'liquid_volume', 'amount', 'current_volume']
for key in volume_keys:
if key in vessel_data:
try:
vol = float(vessel_data[key])
total_volume = max(total_volume, vol) # 取最大值
debug_print(f" ✅ 从容器数据 '{key}' 读取体积: {vol}mL")
break
except (ValueError, TypeError) as e:
logger.warning(f" ⚠️ 无法转换 '{key}': {vessel_data[key]} -> {str(e)}")
continue
# Method 3: check the 'state' or 'status' field
debug_print("🔍 方法3: 检查 'state' 字段...")
if 'state' in vessel_data and isinstance(vessel_data['state'], dict):
state = vessel_data['state']
debug_print(f" - state 字段内容: {state}")
if 'volume' in state:
try:
vol = float(state['volume'])
total_volume = max(total_volume, vol)
debug_print(f" ✅ 从容器状态读取体积: {vol}mL")
except (ValueError, TypeError) as e:
logger.warning(f" ⚠️ 无法转换 state.volume: {state['volume']} -> {str(e)}")
else:
debug_print(" - 没有 'state' 字段或不是字典")
debug_print(f"📊 容器 '{vessel}' 最终检测体积: {total_volume}mL")
return total_volume
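The three-step lookup above (the `liquid` list, then direct volume fields, then the `state` dict) can be condensed into a standalone helper. This sketch takes the node's `data` dict directly instead of the graph, so it is illustrative only, not the graph-aware function:

```python
def data_volume(data: dict) -> float:
    # Step 1: sum volumes from the 'liquid' list.
    total = 0.0
    liquids = data.get("liquid")
    if isinstance(liquids, list):
        for liquid in liquids:
            if isinstance(liquid, dict):
                for key in ("liquid_volume", "volume", "amount", "quantity"):
                    if key in liquid:
                        total += float(liquid[key])
                        break
    # Step 2: direct volume fields on the vessel itself (keep the max).
    for key in ("total_volume", "volume", "liquid_volume", "amount", "current_volume"):
        if key in data:
            total = max(total, float(data[key]))
            break
    # Step 3: a nested 'state' dict.
    state = data.get("state")
    if isinstance(state, dict) and "volume" in state:
        total = max(total, float(state["volume"]))
    return total

print(data_volume({"liquid": [{"liquid_volume": 30.0}, {"volume": 20.0}]}))  # 50.0
```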
def is_integrated_pump(node_name):
return "pump" in node_name and "valve" in node_name
def find_connected_pump(G, valve_node):
"""
查找与阀门相连的泵节点
区分电磁阀和多通阀,电磁阀不参与泵查找
查找与阀门相连的泵节点 - 修复版本
🔧 修复:区分电磁阀和多通阀,电磁阀不参与泵查找
"""
# 检查节点类型,电磁阀不应该查找泵
debug_print(f"🔍 查找与阀门 {valve_node} 相连的泵...")
# 🔧 关键修复:检查节点类型,电磁阀不应该查找泵
node_data = G.nodes.get(valve_node, {})
node_class = node_data.get("class", "") or ""
debug_print(f" - 阀门类型: {node_class}")
# 如果是电磁阀,不应该查找泵(电磁阀只是开关)
if ("solenoid" in node_class.lower() or "solenoid_valve" in valve_node.lower()):
debug_print(f" ⚠️ {valve_node} 是电磁阀,不应该查找泵节点")
raise ValueError(f"电磁阀 {valve_node} 不应该参与泵查找逻辑")
# 只有多通阀等复杂阀门才需要查找连接的泵
if ("multiway" in node_class.lower() or "valve" in node_class.lower()):
debug_print(f" - {valve_node} 是多通阀,查找连接的泵...")
# Method 1: pump directly adjacent to the valve
for neighbor in G.neighbors(valve_node):
neighbor_class = G.nodes[neighbor].get("class", "") or ""
# 排除非 电磁阀 和 泵 的邻居
debug_print(f" - 检查邻居 {neighbor}, class: {neighbor_class}")
if "pump" in neighbor_class.lower():
debug_print(f" ✅ 找到直接相连的泵: {neighbor}")
return neighbor
# Method 2: find a pump via path search (at most 2 hops)
pump_nodes = [
node_id for node_id in G.nodes()
if "pump" in (G.nodes[node_id].get("class", "") or "").lower()
]
debug_print(f" - 未找到直接相连的泵,尝试路径查找...")
# 获取所有泵节点
pump_nodes = []
for node_id in G.nodes():
node_class = G.nodes[node_id].get("class", "") or ""
if "pump" in node_class.lower():
pump_nodes.append(node_id)
debug_print(f" - 系统中的泵节点: {pump_nodes}")
# 查找到泵的最短路径
for pump_node in pump_nodes:
try:
if nx.has_path(G, valve_node, pump_node):
path = nx.shortest_path(G, valve_node, pump_node)
if len(path) - 1 <= 2: # 最多允许2跳
path_length = len(path) - 1
debug_print(f" - 到泵 {pump_node} 的路径: {path}, 距离: {path_length}")
if path_length <= 2: # 最多允许2跳
debug_print(f" ✅ 通过路径找到泵: {pump_node}")
return pump_node
except nx.NetworkXNoPath:
continue
# 最终失败
debug_print(f" ❌ 完全找不到泵节点")
raise ValueError(f"未找到与阀 {valve_node} 相连的泵节点")
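The search above first inspects direct neighbours, then falls back to shortest paths of at most 2 hops. The same behaviour can be sketched as a plain breadth-first search over an adjacency dict, without networkx (the node names and class strings here are hypothetical):

```python
from collections import deque

def find_pump_within(adjacency, classes, start, max_hops=2):
    # BFS from a valve node; return the first node whose class contains
    # "pump" within max_hops, mirroring the 2-hop limit above.
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist > 0 and "pump" in classes.get(node, "").lower():
            return node
        if dist < max_hops:
            for nb in adjacency.get(node, []):
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, dist + 1))
    raise ValueError(f"no pump within {max_hops} hops of {start}")

adjacency = {"valve_1": ["tube_a"], "tube_a": ["pump_1"], "pump_1": []}
classes = {"valve_1": "MultiwayValve", "tube_a": "Tube", "pump_1": "SyringePump"}
print(find_pump_within(adjacency, classes, "valve_1"))  # → pump_1
```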
def build_pump_valve_maps(G, pump_backbone):
"""
构建泵-阀门映射
过滤掉电磁阀,只处理需要泵的多通阀
构建泵-阀门映射 - 修复版本
🔧 修复:过滤掉电磁阀,只处理需要泵的多通阀
"""
pumps_from_node = {}
valve_from_node = {}
# 过滤掉电磁阀
debug_print(f"🔧 构建泵-阀门映射,原始骨架: {pump_backbone}")
# 🔧 关键修复:过滤掉电磁阀
filtered_backbone = []
for node in pump_backbone:
node_data = G.nodes.get(node, {})
node_class = node_data.get("class", "") or ""
# 跳过电磁阀
if ("solenoid" in node_class.lower() or "solenoid_valve" in node.lower()):
debug_print(f" - 跳过电磁阀: {node}")
continue
filtered_backbone.append(node)
debug_print(f"🔧 过滤后的骨架: {filtered_backbone}")
for node in filtered_backbone:
node_data = G.nodes.get(node, {})
node_class = node_data.get("class", "") or ""
if is_integrated_pump(node_class, node):
if is_integrated_pump(G.nodes[node]["class"]):
pumps_from_node[node] = node
valve_from_node[node] = node
debug_print(f" - 集成泵-阀: {node}")
else:
try:
pump_node = find_connected_pump(G, node)
pumps_from_node[node] = pump_node
valve_from_node[node] = node
except ValueError:
debug_print(f" - 阀门 {node} -> 泵 {pump_node}")
except ValueError as e:
debug_print(f" - 跳过节点 {node}: {str(e)}")
continue
debug_print(f"泵-阀映射: pumps={pumps_from_node}, valves={valve_from_node}")
debug_print(f"🔧 最终映射: pumps={pumps_from_node}, valves={valve_from_node}")
return pumps_from_node, valve_from_node
@@ -124,8 +213,8 @@ def generate_pump_protocol(
transfer_flowrate: float = 0.5,
) -> List[Dict[str, Any]]:
"""
生成泵操作的动作序列
正确处理包含电磁阀的路径
生成泵操作的动作序列 - 修复版本
🔧 修复:正确处理包含电磁阀的路径
"""
pump_action_sequence = []
nodes = G.nodes(data=True)
@@ -144,6 +233,7 @@ def generate_pump_protocol(
logger.warning(f"transfer_flowrate <= 0, using default {transfer_flowrate}mL/s")
# 验证容器存在
debug_print(f"🔍 验证源容器 '{from_vessel_id}' 和目标容器 '{to_vessel_id}' 是否存在...")
if from_vessel_id not in G.nodes():
logger.error(f"源容器 '{from_vessel_id}' 不存在")
return pump_action_sequence
@@ -159,24 +249,28 @@ def generate_pump_protocol(
logger.error(f"No path found from '{from_vessel_id}' to '{to_vessel_id}'")
return pump_action_sequence
# 正确构建泵骨架,排除容器和电磁阀
# 🔧 关键修复:正确构建泵骨架,排除容器和电磁阀
pump_backbone = []
for node in shortest_path:
# 跳过起始和结束容器
if node == from_vessel_id or node == to_vessel_id:
continue
# 跳过电磁阀(电磁阀不参与泵操作)
node_data = G.nodes.get(node, {})
node_class = node_data.get("class", "") or ""
if ("solenoid" in node_class.lower() or "solenoid_valve" in node.lower()):
debug_print(f"PUMP_TRANSFER: 跳过电磁阀 {node}")
continue
# 只包含多通阀和泵
if ("multiway" in node_class.lower() or "valve" in node_class.lower() or "pump" in node_class.lower()):
pump_backbone.append(node)
debug_print(f"PUMP_TRANSFER: 泵骨架: {pump_backbone}")
debug_print(f"PUMP_TRANSFER: 过滤后的泵骨架: {pump_backbone}")
if not pump_backbone:
debug_print("PUMP_TRANSFER: 没有泵骨架节点")
debug_print("PUMP_TRANSFER: 没有泵骨架节点,可能是直接容器连接或只有电磁阀")
return pump_action_sequence
if transfer_flowrate == 0:
@@ -192,7 +286,7 @@ def generate_pump_protocol(
debug_print("PUMP_TRANSFER: 没有可用的泵映射")
return pump_action_sequence
# 安全地获取最小转移体积
# 🔧 修复:安全地获取最小转移体积
try:
min_transfer_volumes = []
for node in pump_backbone:
@@ -222,19 +316,19 @@ def generate_pump_protocol(
volume_left = volume
debug_print(f"PUMP_TRANSFER: 需要 {repeats} 次转移,单次最大体积 {min_transfer_volume} mL")
# 只在开头打印总体概览
# 🆕 只在开头打印总体概览
if repeats > 1:
debug_print(f"Batch transfer: total {volume:.2f}mL in {repeats} batches, max {min_transfer_volume} mL per batch")
logger.info(f"分批转移: 总体积 {volume:.2f}mL, {repeats} 次转移")
debug_print(f"🔄 Batch transfer overview: total {volume:.2f}mL, {repeats} transfers needed")
logger.info(f"🔄 Batch transfer overview: total {volume:.2f}mL, {repeats} transfers needed")
# 创建一个自定义的wait动作用于在执行时打印日志
# 🔧 创建一个自定义的wait动作用于在执行时打印日志
def create_progress_log_action(message: str) -> Dict[str, Any]:
"""创建一个特殊的等待动作,在执行时打印进度日志"""
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"progress_message": message
"time": 0.1, # 很短的等待时间
"progress_message": message # 自定义字段,用于进度日志
}
}
@@ -242,12 +336,12 @@ def generate_pump_protocol(
for i in range(repeats):
current_volume = min(volume_left, min_transfer_volume)
# 🆕 在每次循环开始时添加进度日志
if repeats > 1:
pump_action_sequence.append(create_progress_log_action(
f"{i + 1}/{repeats} 次转移: {current_volume:.2f}mL ({from_vessel_id} -> {to_vessel_id})"
))
start_message = f"🚀 Starting transfer {i + 1}/{repeats}: {current_volume:.2f}mL ({from_vessel_id} -> {to_vessel_id}) 🚰"
pump_action_sequence.append(create_progress_log_action(start_message))
# 安全地获取边数据
# 🔧 修复:安全地获取边数据
def get_safe_edge_data(node_a, node_b, key):
try:
edge_data = G.get_edge_data(node_a, node_b)
@@ -350,13 +444,13 @@ def generate_pump_protocol(
])
pump_action_sequence.append({"action_name": "wait", "action_kwargs": {"time": 3}})
# 在每次循环结束时添加完成日志
# 🆕 在每次循环结束时添加完成日志
if repeats > 1:
remaining_volume = volume_left - current_volume
if remaining_volume > 0:
end_message = f"{i + 1}/{repeats} 次完成, 剩余 {remaining_volume:.2f}mL"
end_message = f"{i + 1}/{repeats}转移完成! 剩余 {remaining_volume:.2f}mL 待转移 ⏳"
else:
end_message = f"{i + 1}/{repeats} 次完成, 全部 {volume:.2f}mL 转移完毕"
end_message = f"🎉 {i + 1}/{repeats}转移完成! 全部 {volume:.2f}mL 转移完毕"
pump_action_sequence.append(create_progress_log_action(end_message))
@@ -398,205 +492,300 @@ def generate_pump_protocol_with_rinsing(
to_vessel_id, _ = get_vessel(to_vessel)
with generate_pump_protocol_with_rinsing._lock:
debug_print(f"PUMP_TRANSFER: {from_vessel_id} -> {to_vessel_id}, volume={volume}, flowrate={flowrate}")
debug_print("=" * 60)
debug_print(f"PUMP_TRANSFER: 🚀 开始生成协议 (同步版本)")
debug_print(f" 📍 路径: {from_vessel_id} -> {to_vessel_id}")
debug_print(f" 🕐 时间戳: {time_module.time()}")
debug_print(f" 🔒 获得执行锁")
debug_print("=" * 60)
# 短暂延迟,避免快速重复调用
time_module.sleep(0.01)
debug_print("🔍 步骤1: 开始体积处理...")
# 1. 处理体积参数
final_volume = volume
debug_print(f"📋 初始设置: final_volume = {final_volume}")
# If volume is 0, read the actual volume from the vessel
# 🔧 Fix: if volume is 0 (empty value passed from ROS2), read the actual volume from the vessel
if volume == 0.0:
debug_print("🎯 检测到 volume=0.0,开始自动体积检测...")
actual_volume = get_resource_liquid_volume(G.nodes.get(from_vessel_id, {}))
# 直接从源容器读取实际体积
actual_volume = get_vessel_liquid_volume(G, from_vessel_id)
debug_print(f"📖 从容器 '{from_vessel_id}' 读取到体积: {actual_volume}mL")
if actual_volume > 0:
final_volume = actual_volume
debug_print(f"✅ 成功设置体积为: {final_volume}mL")
else:
final_volume = 10.0
logger.warning(f"无法从容器读取体积,使用默认值: {final_volume}mL")
final_volume = 10.0 # 如果读取失败,使用默认值
logger.warning(f"⚠️ 无法从容器读取体积,使用默认值: {final_volume}mL")
else:
debug_print(f"📌 体积非零,直接使用: {final_volume}mL")
# 处理 amount 参数
if amount and amount.strip():
debug_print(f"🔍 检测到 amount 参数: '{amount}',开始解析...")
parsed_volume = _parse_amount_to_volume(amount)
debug_print(f"📖 从 amount 解析得到体积: {parsed_volume}mL")
if parsed_volume > 0:
final_volume = parsed_volume
debug_print(f"✅ 使用从 amount 解析的体积: {final_volume}mL")
elif parsed_volume == 0.0 and amount.lower().strip() == "all":
actual_volume = get_resource_liquid_volume(G.nodes.get(from_vessel_id, {}))
debug_print("🎯 检测到 amount='all',从容器读取全部体积...")
actual_volume = get_vessel_liquid_volume(G, from_vessel_id)
if actual_volume > 0:
final_volume = actual_volume
debug_print(f"✅ amount='all',设置体积为: {final_volume}mL")
# 最终体积验证
debug_print(f"🔍 步骤2: 最终体积验证...")
if final_volume <= 0:
logger.error(f"体积无效: {final_volume}mL")
logger.error(f"体积无效: {final_volume}mL")
final_volume = 10.0
logger.warning(f"强制设置为默认值: {final_volume}mL")
logger.warning(f"⚠️ 强制设置为默认值: {final_volume}mL")
debug_print(f"最终体积: {final_volume}mL")
debug_print(f"✅ 最终确定体积: {final_volume}mL")
# 2. 处理流速参数
debug_print(f"🔍 步骤3: 处理流速参数...")
debug_print(f" - 原始 flowrate: {flowrate}")
debug_print(f" - 原始 transfer_flowrate: {transfer_flowrate}")
final_flowrate = flowrate if flowrate > 0 else 2.5
final_transfer_flowrate = transfer_flowrate if transfer_flowrate > 0 else 0.5
if flowrate <= 0:
logger.warning(f"flowrate <= 0, corrected to: {final_flowrate}mL/s")
logger.warning(f"⚠️ flowrate <= 0, corrected to: {final_flowrate}mL/s")
if transfer_flowrate <= 0:
logger.warning(f"transfer_flowrate <= 0, corrected to: {final_transfer_flowrate}mL/s")
logger.warning(f"⚠️ transfer_flowrate <= 0, corrected to: {final_transfer_flowrate}mL/s")
debug_print(f"✅ 修正后流速: flowrate={final_flowrate}mL/s, transfer_flowrate={final_transfer_flowrate}mL/s")
# 3. 根据时间计算流速
if time > 0 and final_volume > 0:
debug_print(f"🔍 步骤4: 根据时间计算流速...")
calculated_flowrate = final_volume / time
debug_print(f" - 计算得到流速: {calculated_flowrate}mL/s")
if flowrate <= 0 or flowrate == 2.5:
final_flowrate = min(calculated_flowrate, 10.0)
debug_print(f" - 调整 flowrate 为: {final_flowrate}mL/s")
if transfer_flowrate <= 0 or transfer_flowrate == 0.5:
final_transfer_flowrate = min(calculated_flowrate, 5.0)
debug_print(f" - 调整 transfer_flowrate 为: {final_transfer_flowrate}mL/s")
# 4. 根据速度规格调整
if rate_spec:
debug_print(f"🔍 步骤5: 根据速度规格调整...")
debug_print(f" - 速度规格: '{rate_spec}'")
if rate_spec == "dropwise":
final_flowrate = min(final_flowrate, 0.1)
final_transfer_flowrate = min(final_transfer_flowrate, 0.1)
debug_print(f" - dropwise模式流速调整为: {final_flowrate}mL/s")
elif rate_spec == "slowly":
final_flowrate = min(final_flowrate, 0.5)
final_transfer_flowrate = min(final_transfer_flowrate, 0.3)
debug_print(f" - slowly模式流速调整为: {final_flowrate}mL/s")
elif rate_spec == "quickly":
final_flowrate = max(final_flowrate, 5.0)
final_transfer_flowrate = max(final_transfer_flowrate, 2.0)
debug_print(f"速度规格 '{rate_spec}': flowrate={final_flowrate}, transfer={final_transfer_flowrate}")
debug_print(f" - quickly模式流速调整为: {final_flowrate}mL/s")
# 5. 处理冲洗参数
debug_print(f"🔍 步骤6: 处理冲洗参数...")
final_rinsing_solvent = rinsing_solvent
final_rinsing_volume = rinsing_volume if rinsing_volume > 0 else 5.0
final_rinsing_repeats = rinsing_repeats if rinsing_repeats > 0 else 2
if rinsing_volume <= 0:
logger.warning(f"rinsing_volume <= 0, corrected to: {final_rinsing_volume}mL")
logger.warning(f"⚠️ rinsing_volume <= 0, corrected to: {final_rinsing_volume}mL")
if rinsing_repeats <= 0:
logger.warning(f"rinsing_repeats <= 0, corrected to: {final_rinsing_repeats}")
logger.warning(f"⚠️ rinsing_repeats <= 0, corrected to: {final_rinsing_repeats}")
# 根据物理属性调整冲洗参数
if viscous or solid:
final_rinsing_repeats = max(final_rinsing_repeats, 3)
final_rinsing_volume = max(final_rinsing_volume, 10.0)
debug_print(f"🧪 粘稠/固体物质,调整冲洗参数:{final_rinsing_repeats}次,{final_rinsing_volume}mL")
# 参数总结
debug_print(f"最终参数: volume={final_volume}mL, flowrate={final_flowrate}mL/s, "
f"transfer_flowrate={final_transfer_flowrate}mL/s, "
f"rinsing={final_rinsing_solvent}/{final_rinsing_volume}mL/{final_rinsing_repeats}")
debug_print("📊 最终参数总结:")
debug_print(f" - 体积: {final_volume}mL")
debug_print(f" - 流速: {final_flowrate}mL/s")
debug_print(f" - 转移流速: {final_transfer_flowrate}mL/s")
debug_print(f" - 冲洗溶剂: '{final_rinsing_solvent}'")
debug_print(f" - 冲洗体积: {final_rinsing_volume}mL")
debug_print(f" - 冲洗次数: {final_rinsing_repeats}")
# ========== 执行基础转移 ==========
debug_print("🔧 步骤7: 开始执行基础转移...")
# 执行基础转移
try:
debug_print(f" - 调用 generate_pump_protocol...")
debug_print(
f" - 参数: G, '{from_vessel_id}', '{to_vessel_id}', {final_volume}, {final_flowrate}, {final_transfer_flowrate}")
pump_action_sequence = generate_pump_protocol(
G, from_vessel_id, to_vessel_id, final_volume,
final_flowrate, final_transfer_flowrate
)
debug_print(f"基础转移生成了 {len(pump_action_sequence)} 个动作")
debug_print(f" - generate_pump_protocol 返回结果:")
debug_print(f" - 动作序列长度: {len(pump_action_sequence)}")
debug_print(f" - 动作序列是否为空: {len(pump_action_sequence) == 0}")
if not pump_action_sequence:
debug_print("基础转移协议为空")
debug_print("基础转移协议生成为空,可能是路径问题")
debug_print(f" - 源容器存在: {from_vessel_id in G.nodes()}")
debug_print(f" - 目标容器存在: {to_vessel_id in G.nodes()}")
if from_vessel_id in G.nodes() and to_vessel_id in G.nodes():
try:
path = nx.shortest_path(G, source=from_vessel_id, target=to_vessel_id)
debug_print(f"路径存在: {path}")
except Exception:
pass
debug_print(f" - 路径存在: {path}")
except Exception as path_error:
debug_print(f" - 无法找到路径: {str(path_error)}")
return [
{
"device_id": "system",
"action_name": "log_message",
"action_kwargs": {
"message": f"路径问题,无法转移: {final_volume}mL 从 {from_vessel_id}{to_vessel_id}"
"message": f"⚠️ 路径问题,无法转移: {final_volume}mL 从 {from_vessel_id}{to_vessel_id}"
}
}
]
debug_print(f"✅ 基础转移生成了 {len(pump_action_sequence)} 个动作")
# 打印前几个动作用于调试
if len(pump_action_sequence) > 0:
debug_print("🔍 前几个动作预览:")
for i, action in enumerate(pump_action_sequence[:3]):
debug_print(f" 动作 {i + 1}: {action}")
if len(pump_action_sequence) > 3:
debug_print(f" ... 还有 {len(pump_action_sequence) - 3} 个动作")
except Exception as e:
debug_print(f"基础转移失败: {str(e)}\n{traceback.format_exc()}")
debug_print(f"基础转移失败: {str(e)}")
import traceback
debug_print(f"详细错误: {traceback.format_exc()}")
return [
{
"device_id": "system",
"action_name": "log_message",
"action_kwargs": {
"message": f"转移失败: {final_volume}mL 从 {from_vessel_id}{to_vessel_id}, 错误: {str(e)}"
"message": f"转移失败: {final_volume}mL 从 {from_vessel_id}{to_vessel_id}, 错误: {str(e)}"
}
}
]
# 执行冲洗操作
# ========== 执行冲洗操作 ==========
debug_print("🔧 步骤8: 检查冲洗操作...")
if final_rinsing_solvent and final_rinsing_solvent.strip() and final_rinsing_repeats > 0:
debug_print(f"🧽 开始冲洗操作,溶剂: '{final_rinsing_solvent}'")
try:
if final_rinsing_solvent.strip() != "air":
debug_print(" - 执行液体冲洗...")
rinsing_actions = _generate_rinsing_sequence(
G, from_vessel_id, to_vessel_id, final_rinsing_solvent,
final_rinsing_volume, final_rinsing_repeats,
final_flowrate, final_transfer_flowrate
)
pump_action_sequence.extend(rinsing_actions)
debug_print(f" - 添加了 {len(rinsing_actions)} 个冲洗动作")
else:
debug_print(" - 执行空气冲洗...")
air_rinsing_actions = _generate_air_rinsing_sequence(
G, from_vessel_id, to_vessel_id, final_rinsing_volume, final_rinsing_repeats,
final_flowrate, final_transfer_flowrate
)
pump_action_sequence.extend(air_rinsing_actions)
debug_print(f" - 添加了 {len(air_rinsing_actions)} 个空气冲洗动作")
except Exception as e:
debug_print(f"冲洗操作失败: {str(e)}")
debug_print(f"⚠️ 冲洗操作失败: {str(e)},跳过冲洗")
else:
debug_print(f"跳过冲洗 (solvent='{final_rinsing_solvent}', repeats={final_rinsing_repeats})")
debug_print(f"⏭️ 跳过冲洗操作")
debug_print(f" - 溶剂: '{final_rinsing_solvent}'")
debug_print(f" - 次数: {final_rinsing_repeats}")
debug_print(f" - 条件满足: {bool(final_rinsing_solvent and final_rinsing_solvent.strip() and final_rinsing_repeats > 0)}")
# 最终结果
debug_print(f"PUMP_TRANSFER 完成: {from_vessel_id} -> {to_vessel_id}, "
f"volume={final_volume}mL, 动作数={len(pump_action_sequence)}")
# ========== 最终结果 ==========
debug_print("=" * 60)
debug_print(f"🎉 PUMP_TRANSFER: 协议生成完成")
debug_print(f" 📊 总动作数: {len(pump_action_sequence)}")
debug_print(f" 📋 最终体积: {final_volume}mL")
debug_print(f" 🚀 执行路径: {from_vessel_id} -> {to_vessel_id}")
# 最终验证
if len(pump_action_sequence) == 0:
debug_print("🚨 协议生成结果为空!这是异常情况")
return [
{
"device_id": "system",
"action_name": "log_message",
"action_kwargs": {
"message": "协议生成失败: 无法生成任何动作序列"
"message": f"🚨 协议生成失败: 无法生成任何动作序列"
}
}
]
debug_print("=" * 60)
return pump_action_sequence
def _parse_amount_to_volume(amount: str) -> float:
"""解析 amount 字符串为体积"""
debug_print(f"🔍 解析 amount: '{amount}'")
if not amount:
debug_print(" - amount 为空,返回 0.0")
return 0.0
amount = amount.lower().strip()
debug_print(f" - 处理后的 amount: '{amount}'")
# 处理特殊关键词
if amount == "all":
debug_print(" - 检测到 'all',返回 0.0(需要后续处理)")
return 0.0 # 返回0.0,让调用者处理
# 提取数字
import re
numbers = re.findall(r'[\d.]+', amount)
debug_print(f" - 提取到的数字: {numbers}")
if numbers:
volume = float(numbers[0])
debug_print(f" - 基础体积: {volume}")
# 单位转换
if 'ml' in amount or 'milliliter' in amount:
debug_print(f" - unit: mL, final volume: {volume}")
return volume
elif 'l' in amount and 'ml' not in amount:
return volume * 1000
final_volume = volume * 1000
debug_print(f" - unit: L, final volume: {final_volume}mL")
return final_volume
elif 'μl' in amount or 'microliter' in amount:
return volume / 1000
final_volume = volume / 1000
debug_print(f" - unit: μL, final volume: {final_volume}mL")
return final_volume
else:
return volume # 默认mL
debug_print(f" - 无单位,假设为 mL: {volume}")
return volume
debug_print(" - 无法解析,返回 0.0")
return 0.0
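A condensed, standalone version of the parsing rules above. Note one deliberate change: the microlitre check runs before the bare-litre check, because "μl" also contains the letter "l" and would otherwise match the litre branch and be scaled up by 1000:

```python
import re

def amount_to_ml(amount: str) -> float:
    # Parse an amount string to millilitres: "all" -> 0.0 (the caller
    # resolves it from the source vessel), unit suffixes ml / μl / l,
    # and bare numbers default to mL.
    amount = (amount or "").lower().strip()
    if not amount or amount == "all":
        return 0.0
    numbers = re.findall(r"[\d.]+", amount)
    if not numbers:
        return 0.0
    volume = float(numbers[0])
    if "ml" in amount or "milliliter" in amount:
        return volume
    if "μl" in amount or "microliter" in amount:
        return volume / 1000  # checked before "l" to avoid the μl/l clash
    if "l" in amount:
        return volume * 1000
    return volume

print(amount_to_ml("100 ml"), amount_to_ml("2.5 L"), amount_to_ml("all"))
# → 100.0 2500.0 0.0
```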


@@ -4,64 +4,76 @@ import logging
from typing import List, Dict, Any, Tuple, Union
from .utils.vessel_parser import get_vessel, find_solvent_vessel
from .utils.unit_parser import parse_volume_input
from .utils.logger_util import debug_print
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
def debug_print(message):
"""调试输出"""
logger.info(f"[RECRYSTALLIZE] {message}")
def parse_ratio(ratio_str: str) -> Tuple[float, float]:
"""
解析比例字符串,支持多种格式
Args:
ratio_str: ratio string (e.g. "1:1", "3:7", "50:50")
Returns:
Tuple[float, float]: 比例元组 (ratio1, ratio2)
"""
debug_print(f"⚖️ 开始解析比例: '{ratio_str}' 📊")
try:
# 处理 "1:1", "3:7", "50:50" 等格式
if ":" in ratio_str:
parts = ratio_str.split(":")
if len(parts) == 2:
ratio1 = float(parts[0])
ratio2 = float(parts[1])
debug_print(f"✅ 冒号格式解析成功: {ratio1}:{ratio2} 🎯")
return ratio1, ratio2
# 处理 "1-1", "3-7" 等格式
if "-" in ratio_str:
parts = ratio_str.split("-")
if len(parts) == 2:
ratio1 = float(parts[0])
ratio2 = float(parts[1])
debug_print(f"✅ 横线格式解析成功: {ratio1}:{ratio2} 🎯")
return ratio1, ratio2
# 处理 "1,1", "3,7" 等格式
if "," in ratio_str:
parts = ratio_str.split(",")
if len(parts) == 2:
ratio1 = float(parts[0])
ratio2 = float(parts[1])
debug_print(f"✅ 逗号格式解析成功: {ratio1}:{ratio2} 🎯")
return ratio1, ratio2
debug_print(f"无法解析比例 '{ratio_str}',使用默认比例 1:1")
# 默认 1:1
debug_print(f"⚠️ 无法解析比例 '{ratio_str}',使用默认比例 1:1 🎭")
return 1.0, 1.0
except ValueError:
debug_print(f"比例解析错误 '{ratio_str}',使用默认比例 1:1")
debug_print(f"比例解析错误 '{ratio_str}',使用默认比例 1:1 🎭")
return 1.0, 1.0
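The three separator branches above are identical apart from the delimiter, so they can be folded into one loop; a minimal sketch with the same fallback behaviour:

```python
def parse_ratio(ratio_str: str, default=(1.0, 1.0)):
    # Split a two-part ratio on the first supported separator
    # (":", "-", ","); fall back to the default on any parse failure.
    for sep in (":", "-", ","):
        if sep in ratio_str:
            parts = ratio_str.split(sep)
            if len(parts) == 2:
                try:
                    return float(parts[0]), float(parts[1])
                except ValueError:
                    return default
    return default

print(parse_ratio("3:7"), parse_ratio("50-50"), parse_ratio("oops"))
# → (3.0, 7.0) (50.0, 50.0) (1.0, 1.0)
```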
def generate_recrystallize_protocol(
G: nx.DiGraph,
vessel: dict,
vessel: dict, # 🔧 修改:从字符串改为字典类型
ratio: str,
solvent1: str,
solvent2: str,
volume: Union[str, float],
volume: Union[str, float], # 支持字符串和数值
**kwargs
) -> List[Dict[str, Any]]:
"""
生成重结晶协议序列 - 支持vessel字典和体积运算
Args:
G: 有向图,节点为容器和设备
vessel: 目标容器字典从XDL传入
@@ -70,18 +82,28 @@ def generate_recrystallize_protocol(
solvent2: 第二种溶剂名称
volume: 总体积(支持 "100 mL", "50", "2.5 L" 等)
**kwargs: 其他可选参数
Returns:
List[Dict[str, Any]]: 动作序列
"""
# 🔧 核心修改从字典中提取容器ID
vessel_id, vessel_data = get_vessel(vessel)
action_sequence = []
debug_print(f"开始生成重结晶协议: vessel={vessel_id}, ratio={ratio}, solvent1={solvent1}, solvent2={solvent2}, volume={volume}")
# 记录重结晶前的容器状态
debug_print("💎" * 20)
debug_print("🚀 Generating recrystallization protocol (supports vessel dict and volume arithmetic)")
debug_print(f"📝 输入参数:")
debug_print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" ⚖️ 比例: {ratio}")
debug_print(f" 🧪 溶剂1: {solvent1}")
debug_print(f" 🧪 溶剂2: {solvent2}")
debug_print(f" 💧 总体积: {volume} (类型: {type(volume)})")
debug_print("💎" * 20)
# 🔧 新增:记录重结晶前的容器状态
debug_print("🔍 记录重结晶前容器状态...")
original_liquid_volume = 0.0
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
@@ -89,73 +111,102 @@ def generate_recrystallize_protocol(
original_liquid_volume = current_volume[0]
elif isinstance(current_volume, (int, float)):
original_liquid_volume = current_volume
debug_print(f"📊 重结晶前液体体积: {original_liquid_volume:.2f}mL")
# 1. 验证目标容器存在
if vessel_id not in G.nodes():
debug_print("📍 步骤1: 验证目标容器... 🔧")
if vessel_id not in G.nodes(): # 🔧 使用 vessel_id
debug_print(f"❌ 目标容器 '{vessel_id}' 不存在于系统中! 😱")
raise ValueError(f"目标容器 '{vessel_id}' 不存在于系统中")
debug_print(f"✅ 目标容器 '{vessel_id}' 验证通过 🎯")
# 2. 解析体积(支持单位)
debug_print("📍 步骤2: 解析体积(支持单位)... 💧")
final_volume = parse_volume_input(volume, "mL")
debug_print(f"体积解析: {volume} -> {final_volume}mL")
debug_print(f"🎯 Volume parsing complete: {volume} -> {final_volume}mL")
# 3. 解析比例
debug_print("📍 步骤3: 解析比例... ⚖️")
ratio1, ratio2 = parse_ratio(ratio)
total_ratio = ratio1 + ratio2
debug_print(f"🎯 比例解析完成: {ratio1}:{ratio2} (总比例: {total_ratio}) ✨")
# 4. 计算各溶剂体积
debug_print("📍 步骤4: 计算各溶剂体积... 🧮")
volume1 = final_volume * (ratio1 / total_ratio)
volume2 = final_volume * (ratio2 / total_ratio)
debug_print(f"溶剂体积: {solvent1}={volume1:.2f}mL, {solvent2}={volume2:.2f}mL")
debug_print(f"🧪 {solvent1} 体积: {volume1:.2f} mL ({ratio1}/{total_ratio} × {final_volume})")
debug_print(f"🧪 {solvent2} 体积: {volume2:.2f} mL ({ratio2}/{total_ratio} × {final_volume})")
debug_print(f"✅ 体积计算完成: 总计 {volume1 + volume2:.2f} mL 🎯")
# 5. 查找溶剂容器
debug_print("📍 步骤5: 查找溶剂容器... 🔍")
try:
debug_print(f" 🔍 查找溶剂1容器...")
solvent1_vessel = find_solvent_vessel(G, solvent1)
debug_print(f" 🎉 找到溶剂1容器: {solvent1_vessel}")
except ValueError as e:
debug_print(f" ❌ 溶剂1容器查找失败: {str(e)} 😭")
raise ValueError(f"无法找到溶剂1 '{solvent1}': {str(e)}")
try:
debug_print(f" 🔍 查找溶剂2容器...")
solvent2_vessel = find_solvent_vessel(G, solvent2)
debug_print(f" 🎉 找到溶剂2容器: {solvent2_vessel}")
except ValueError as e:
debug_print(f" ❌ 溶剂2容器查找失败: {str(e)} 😭")
raise ValueError(f"无法找到溶剂2 '{solvent2}': {str(e)}")
# 6. 验证路径存在
debug_print("📍 步骤6: 验证传输路径... 🛤️")
try:
path1 = nx.shortest_path(G, source=solvent1_vessel, target=vessel_id)
path1 = nx.shortest_path(G, source=solvent1_vessel, target=vessel_id) # 🔧 使用 vessel_id
debug_print(f" 🛤️ Solvent 1 path: {' -> '.join(path1)}")
except nx.NetworkXNoPath:
debug_print(f" ❌ Solvent 1 path unreachable: {solvent1_vessel} -> {vessel_id} 😞")
raise ValueError(f"No available path from solvent 1 vessel '{solvent1_vessel}' to target vessel '{vessel_id}'")
try:
path2 = nx.shortest_path(G, source=solvent2_vessel, target=vessel_id)
path2 = nx.shortest_path(G, source=solvent2_vessel, target=vessel_id) # 🔧 使用 vessel_id
debug_print(f" 🛤️ Solvent 2 path: {' -> '.join(path2)}")
except nx.NetworkXNoPath:
debug_print(f" ❌ Solvent 2 path unreachable: {solvent2_vessel} -> {vessel_id} 😞")
raise ValueError(f"No available path from solvent 2 vessel '{solvent2_vessel}' to target vessel '{vessel_id}'")
# 7. 添加第一种溶剂
debug_print("📍 步骤7: 添加第一种溶剂... 🧪")
debug_print(f" 🚰 开始添加溶剂1: {solvent1} ({volume1:.2f} mL)")
try:
pump_actions1 = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent1_vessel,
to_vessel=vessel_id,
volume=volume1,
to_vessel=vessel_id, # 🔧 使用 vessel_id
volume=volume1, # 使用解析后的体积
amount="",
time=0.0,
viscous=False,
rinsing_solvent="",
rinsing_solvent="", # 重结晶不需要清洗
rinsing_volume=0.0,
rinsing_repeats=0,
solid=False,
flowrate=2.0,
flowrate=2.0, # 正常流速
transfer_flowrate=0.5
)
action_sequence.extend(pump_actions1)
debug_print(f" ✅ 溶剂1泵送动作已添加: {len(pump_actions1)} 个动作 🚰✨")
except Exception as e:
debug_print(f" ❌ 溶剂1泵协议生成失败: {str(e)} 😭")
raise ValueError(f"生成溶剂1泵协议时出错: {str(e)}")
# 更新容器体积 - 添加溶剂1后
# 🔧 新增:更新容器体积 - 添加溶剂1后
debug_print(" 🔧 更新容器体积 - 添加溶剂1后...")
new_volume_after_solvent1 = original_liquid_volume + volume1
# 更新vessel字典中的体积
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
@@ -165,14 +216,15 @@ def generate_recrystallize_protocol(
vessel["data"]["liquid_volume"] = [new_volume_after_solvent1]
else:
vessel["data"]["liquid_volume"] = new_volume_after_solvent1
# 同时更新图中的容器数据
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume_after_solvent1
@@ -180,42 +232,53 @@ def generate_recrystallize_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume_after_solvent1]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume_after_solvent1
debug_print(f" 📊 体积更新: {original_liquid_volume:.2f}mL + {volume1:.2f}mL = {new_volume_after_solvent1:.2f}mL")
# 8. 等待溶剂1稳定
debug_print(" ⏳ 添加溶剂1稳定等待...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
"time": 5.0,
"time": 5.0, # 缩短等待时间
"description": f"等待溶剂1 {solvent1} 稳定"
}
})
debug_print(" ✅ 溶剂1稳定等待已添加 ⏰✨")
# 9. 添加第二种溶剂
debug_print("📍 步骤8: 添加第二种溶剂... 🧪")
debug_print(f" 🚰 开始添加溶剂2: {solvent2} ({volume2:.2f} mL)")
try:
pump_actions2 = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent2_vessel,
to_vessel=vessel_id,
volume=volume2,
to_vessel=vessel_id, # 🔧 使用 vessel_id
volume=volume2, # 使用解析后的体积
amount="",
time=0.0,
viscous=False,
rinsing_solvent="",
rinsing_solvent="", # 重结晶不需要清洗
rinsing_volume=0.0,
rinsing_repeats=0,
solid=False,
flowrate=2.0,
flowrate=2.0, # 正常流速
transfer_flowrate=0.5
)
action_sequence.extend(pump_actions2)
debug_print(f" ✅ 溶剂2泵送动作已添加: {len(pump_actions2)} 个动作 🚰✨")
except Exception as e:
debug_print(f" ❌ 溶剂2泵协议生成失败: {str(e)} 😭")
raise ValueError(f"生成溶剂2泵协议时出错: {str(e)}")
# 更新容器体积 - 添加溶剂2后
# 🔧 新增:更新容器体积 - 添加溶剂2后
debug_print(" 🔧 更新容器体积 - 添加溶剂2后...")
final_liquid_volume = new_volume_after_solvent1 + volume2
# 更新vessel字典中的体积
if "data" in vessel and "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
@@ -225,14 +288,15 @@ def generate_recrystallize_protocol(
vessel["data"]["liquid_volume"] = [final_liquid_volume]
else:
vessel["data"]["liquid_volume"] = final_liquid_volume
# 同时更新图中的容器数据
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = final_liquid_volume
@@ -240,25 +304,36 @@ def generate_recrystallize_protocol(
G.nodes[vessel_id]['data']['liquid_volume'] = [final_liquid_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = final_liquid_volume
debug_print(f" 📊 最终体积: {new_volume_after_solvent1:.2f}mL + {volume2:.2f}mL = {final_liquid_volume:.2f}mL")
# 10. 等待溶剂2稳定
debug_print(" ⏳ 添加溶剂2稳定等待...")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
"time": 5.0,
"time": 5.0, # 缩短等待时间
"description": f"等待溶剂2 {solvent2} 稳定"
}
})
debug_print(" ✅ 溶剂2稳定等待已添加 ⏰✨")
# 11. 等待重结晶完成
original_crystallize_time = 600.0
simulation_time_limit = 60.0
debug_print("📍 步骤9: 等待重结晶完成... 💎")
# 模拟运行时间优化
debug_print(" ⏱️ 检查模拟运行时间限制...")
original_crystallize_time = 600.0 # 原始重结晶时间
simulation_time_limit = 60.0 # 模拟运行时间限制60秒
final_crystallize_time = min(original_crystallize_time, simulation_time_limit)
if original_crystallize_time > simulation_time_limit:
debug_print(f"模拟运行优化: {original_crystallize_time}s -> {final_crystallize_time}s")
debug_print(f" 🎮 Simulation-run optimization: {original_crystallize_time}s -> {final_crystallize_time}s")
debug_print(f" 📊 时间缩短: {original_crystallize_time/60:.1f}分钟 → {final_crystallize_time/60:.1f}分钟 🚀")
else:
debug_print(f" ✅ 时间在限制内: {final_crystallize_time}s 保持不变 🎯")
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -266,28 +341,50 @@ def generate_recrystallize_protocol(
"description": f"Waiting for recrystallization to complete ({solvent1}:{solvent2} = {ratio}, total volume {final_volume}mL)" + (" (simulated time)" if original_crystallize_time != final_crystallize_time else "")
}
})
debug_print(f"重结晶协议生成完成: {len(action_sequence)} 个动作, 容器={vessel_id}, 体积变化: {original_liquid_volume:.2f} -> {final_liquid_volume:.2f}mL")
debug_print(f" ✅ 重结晶等待已添加: {final_crystallize_time}s 💎✨")
# 显示时间调整信息
if original_crystallize_time != final_crystallize_time:
debug_print(f" 🎭 模拟优化说明: 原计划 {original_crystallize_time/60:.1f}分钟,实际模拟 {final_crystallize_time/60:.1f}分钟 ⚡")
# 总结
debug_print("💎" * 20)
debug_print(f"🎉 重结晶协议生成完成! ✨")
debug_print(f"📊 总动作数: {len(action_sequence)}")
debug_print(f"🥽 目标容器: {vessel_id}")
debug_print(f"💧 总体积变化:")
debug_print(f" - 原始体积: {original_liquid_volume:.2f}mL")
debug_print(f" - 添加溶剂: {final_volume:.2f}mL")
debug_print(f" - 最终体积: {final_liquid_volume:.2f}mL")
debug_print(f"⚖️ 溶剂比例: {solvent1}:{solvent2} = {ratio1}:{ratio2}")
debug_print(f"🧪 溶剂1: {solvent1} ({volume1:.2f}mL)")
debug_print(f"🧪 溶剂2: {solvent2} ({volume2:.2f}mL)")
debug_print(f"⏱️ 预计总时间: {(final_crystallize_time + 10)/60:.1f} 分钟 ⌛")
debug_print("💎" * 20)
return action_sequence
# 测试函数
def test_recrystallize_protocol():
"""测试重结晶协议"""
debug_print("=== RECRYSTALLIZE PROTOCOL 测试 ===")
debug_print("🧪 === RECRYSTALLIZE PROTOCOL 测试 ===")
# 测试体积解析
debug_print("💧 测试体积解析...")
test_volumes = ["100 mL", "2.5 L", "500", "50.5", "?", "invalid"]
for vol in test_volumes:
parsed = parse_volume_input(vol)
debug_print(f" 📊 体积 '{vol}' -> {parsed}mL")
# 测试比例解析
debug_print("⚖️ 测试比例解析...")
test_ratios = ["1:1", "3:7", "50:50", "1-1", "2,8", "invalid"]
for ratio in test_ratios:
r1, r2 = parse_ratio(ratio)
debug_print(f" 📊 比例 '{ratio}' -> {r1}:{r2}")
debug_print("测试完成 🎉")
if __name__ == "__main__":
test_recrystallize_protocol()
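The simulated-run optimization in `generate_recrystallize_protocol` clamps long waits with `min(original, limit)` so dry runs finish quickly. A minimal sketch of just that pattern (`clamp_wait_time` is a hypothetical helper, not part of the codebase):

```python
# Illustrative sketch of the dry-run time cap used above: waits longer than
# the simulation limit are clamped; short waits pass through unchanged.
def clamp_wait_time(original_s: float, limit_s: float = 60.0):
    """Return (final_time, was_clamped)."""
    final = min(original_s, limit_s)
    return final, final != original_s

final, clamped = clamp_wait_time(600.0)   # the 600 s recrystallization wait
assert clamped and final == 60.0
final, clamped = clamp_wait_time(30.0)    # already under the limit
assert not clamped and final == 30.0
```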

View File

@@ -1,87 +1,253 @@
import networkx as nx
import logging
import sys
from typing import List, Dict, Any, Optional
from .utils.logger_util import debug_print, action_log
from .utils.vessel_parser import find_solvent_vessel
from .pump_protocol import generate_pump_protocol_with_rinsing
# 设置日志
logger = logging.getLogger(__name__)
create_action_log = action_log
# 确保输出编码为UTF-8
if hasattr(sys.stdout, 'reconfigure'):
try:
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
except:
pass
def debug_print(message):
"""调试输出函数 - 支持中文"""
try:
# 确保消息是字符串格式
safe_message = str(message)
print(f"[重置处理] {safe_message}", flush=True)
logger.info(f"[重置处理] {safe_message}")
except UnicodeEncodeError:
# 如果编码失败,尝试替换不支持的字符
safe_message = str(message).encode('utf-8', errors='replace').decode('utf-8')
print(f"[重置处理] {safe_message}", flush=True)
logger.info(f"[重置处理] {safe_message}")
except Exception as e:
# 最后的安全措施
fallback_message = f"日志输出错误: {repr(message)}"
print(f"[重置处理] {fallback_message}", flush=True)
logger.info(f"[重置处理] {fallback_message}")
def create_action_log(message: str, emoji: str = "📝") -> Dict[str, Any]:
"""创建一个动作日志 - 支持中文和emoji"""
try:
full_message = f"{emoji} {message}"
debug_print(full_message)
logger.info(full_message)
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"log_message": full_message,
"progress_message": full_message
}
}
except Exception as e:
# 如果emoji有问题使用纯文本
safe_message = f"[日志] {message}"
debug_print(safe_message)
logger.info(safe_message)
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"log_message": safe_message,
"progress_message": safe_message
}
}
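`create_action_log` above encodes a log message as a near-zero `wait` action so it flows through the same action pipeline as real steps. A self-contained sketch of that record shape (`make_action_log` is a hypothetical stand-in):

```python
# Minimal sketch of the action-log record: a 0.1 s "wait" action whose
# kwargs carry the formatted log text for both the log and progress channels.
from typing import Any, Dict

def make_action_log(message: str, emoji: str = "📝") -> Dict[str, Any]:
    full_message = f"{emoji} {message}"
    return {
        "action_name": "wait",
        "action_kwargs": {
            "time": 0.1,
            "log_message": full_message,
            "progress_message": full_message,
        },
    }

entry = make_action_log("transfer started", "🚛")
assert entry["action_name"] == "wait"
assert entry["action_kwargs"]["log_message"].endswith("transfer started")
```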
def find_solvent_vessel(G: nx.DiGraph, solvent: str) -> str:
"""
查找溶剂容器,支持多种匹配模式
Args:
G: 网络图
solvent: 溶剂名称(如 "methanol", "ethanol", "water")
Returns:
str: 溶剂容器ID
"""
debug_print(f"🔍 正在查找溶剂 '{solvent}' 的容器...")
# 构建可能的容器名称
possible_names = [
f"flask_{solvent}", # flask_methanol
f"bottle_{solvent}", # bottle_methanol
f"reagent_{solvent}", # reagent_methanol
f"reagent_bottle_{solvent}", # reagent_bottle_methanol
f"{solvent}_flask", # methanol_flask
f"{solvent}_bottle", # methanol_bottle
f"{solvent}", # methanol
f"vessel_{solvent}", # vessel_methanol
]
debug_print(f"🎯 候选容器名称: {possible_names[:3]}... (共{len(possible_names)}个)")
# 第一步:通过容器名称匹配
debug_print("📋 方法1: 精确名称匹配...")
for vessel_name in possible_names:
if vessel_name in G.nodes():
debug_print(f"✅ 通过名称匹配找到容器: {vessel_name}")
return vessel_name
debug_print("⚠️ 精确名称匹配失败,尝试模糊匹配...")
# 第二步:通过模糊匹配
debug_print("📋 方法2: 模糊名称匹配...")
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
node_name = G.nodes[node_id].get('name', '').lower()
# 检查是否包含溶剂名称
if solvent.lower() in node_id.lower() or solvent.lower() in node_name:
debug_print(f"✅ 通过模糊匹配找到容器: {node_id}")
return node_id
debug_print("⚠️ 模糊匹配失败,尝试液体类型匹配...")
# 第三步:通过液体类型匹配
debug_print("📋 方法3: 液体类型匹配...")
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
vessel_data = G.nodes[node_id].get('data', {})
liquids = vessel_data.get('liquid', [])
for liquid in liquids:
if isinstance(liquid, dict):
liquid_type = (liquid.get('liquid_type') or liquid.get('name', '')).lower()
reagent_name = vessel_data.get('reagent_name', '').lower()
if solvent.lower() in liquid_type or solvent.lower() in reagent_name:
debug_print(f"✅ 通过液体类型匹配找到容器: {node_id}")
return node_id
# 列出可用容器帮助调试
debug_print("📊 显示可用容器信息...")
available_containers = []
for node_id in G.nodes():
if G.nodes[node_id].get('type') == 'container':
vessel_data = G.nodes[node_id].get('data', {})
liquids = vessel_data.get('liquid', [])
liquid_types = [liquid.get('liquid_type', '') or liquid.get('name', '')
for liquid in liquids if isinstance(liquid, dict)]
available_containers.append({
'id': node_id,
'name': G.nodes[node_id].get('name', ''),
'liquids': liquid_types,
'reagent_name': vessel_data.get('reagent_name', '')
})
debug_print(f"📋 可用容器列表 (共{len(available_containers)}个):")
for i, container in enumerate(available_containers[:5]): # 只显示前5个
debug_print(f" {i+1}. 🥽 {container['id']}: {container['name']}")
debug_print(f" 💧 液体: {container['liquids']}")
debug_print(f" 🧪 试剂: {container['reagent_name']}")
if len(available_containers) > 5:
debug_print(f" ... 还有 {len(available_containers)-5} 个容器")
debug_print(f"❌ 找不到溶剂 '{solvent}' 对应的容器")
raise ValueError(f"找不到溶剂 '{solvent}' 对应的容器。尝试了: {possible_names[:3]}...")
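The lookup above tries exact candidate names, then fuzzy node-id matches, then the liquid types stored on each container. A simplified dict-based sketch of the same three-stage fallback (plain dicts stand in for the networkx graph; this is not the actual API):

```python
# Three-stage solvent-vessel lookup: exact names -> fuzzy id -> liquid type.
def find_vessel(nodes: dict, solvent: str) -> str:
    for name in (f"flask_{solvent}", f"bottle_{solvent}", solvent):
        if name in nodes:                       # stage 1: exact names
            return name
    for node_id, attrs in nodes.items():        # stage 2: fuzzy id match
        if attrs.get("type") == "container" and solvent in node_id.lower():
            return node_id
    for node_id, attrs in nodes.items():        # stage 3: stored liquid type
        for liq in attrs.get("data", {}).get("liquid", []):
            if liq.get("liquid_type", "").lower() == solvent:
                return node_id
    raise ValueError(f"no vessel holds {solvent!r}")

nodes = {"r1": {"type": "container",
                "data": {"liquid": [{"liquid_type": "methanol"}]}}}
assert find_vessel(nodes, "methanol") == "r1"
```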
def generate_reset_handling_protocol(
G: nx.DiGraph,
solvent: str,
vessel: Optional[str] = None,  # 🆕 新增可选vessel参数
**kwargs  # 接收其他可能的参数但不使用
) -> List[Dict[str, Any]]:
"""
生成重置处理协议序列 - 支持自定义容器
Args:
G: 有向图,节点为容器和设备
solvent: 溶剂名称(从XDL传入)
vessel: 目标容器名称(可选,默认为 "main_reactor")
**kwargs: 其他可选参数,但不使用
Returns:
List[Dict[str, Any]]: 动作序列
"""
action_sequence = []
# 🔧 修改支持自定义vessel参数
target_vessel = vessel if vessel is not None else "main_reactor"  # 默认目标容器
volume = 50.0  # 默认体积 50 mL
debug_print("=" * 60)
debug_print("🚀 开始生成重置处理协议")
debug_print(f"📋 输入参数:")
debug_print(f" 🧪 溶剂: {solvent}")
debug_print(f" 🥽 目标容器: {target_vessel} {'(默认)' if vessel is None else '(指定)'}")
debug_print(f" 💧 体积: {volume} mL")
debug_print(f" ⚙️ 其他参数: {kwargs}")
debug_print("=" * 60)
# 添加初始日志
action_sequence.append(create_action_log(f"开始重置处理操作 - 容器: {target_vessel}", "🎬"))
action_sequence.append(create_action_log(f"使用溶剂: {solvent}", "🧪"))
action_sequence.append(create_action_log(f"重置体积: {volume}mL", "💧"))
if vessel is None:
action_sequence.append(create_action_log("使用默认目标容器: main_reactor", "⚙️"))
else:
action_sequence.append(create_action_log(f"使用指定目标容器: {vessel}", "🎯"))
# 1. 验证目标容器存在
debug_print("🔍 步骤1: 验证目标容器...")
action_sequence.append(create_action_log("正在验证目标容器...", "🔍"))
if target_vessel not in G.nodes():
debug_print(f"目标容器 '{target_vessel}' 不存在于系统中!")
action_sequence.append(create_action_log(f"目标容器 '{target_vessel}' 不存在", ""))
raise ValueError(f"目标容器 '{target_vessel}' 不存在于系统中")
debug_print(f"目标容器 '{target_vessel}' 验证通过")
action_sequence.append(create_action_log(f"目标容器验证通过: {target_vessel}", ""))
# 2. 查找溶剂容器
debug_print("🔍 步骤2: 查找溶剂容器...")
action_sequence.append(create_action_log("正在查找溶剂容器...", "🔍"))
try:
solvent_vessel = find_solvent_vessel(G, solvent)
debug_print(f"找到溶剂容器: {solvent_vessel}")
action_sequence.append(create_action_log(f"找到溶剂容器: {solvent_vessel}", ""))
except ValueError as e:
debug_print(f"溶剂容器查找失败: {str(e)}")
action_sequence.append(create_action_log(f"溶剂容器查找失败: {str(e)}", ""))
raise ValueError(f"无法找到溶剂 '{solvent}': {str(e)}")
# 3. 验证路径存在
debug_print("🔍 步骤3: 验证传输路径...")
action_sequence.append(create_action_log("正在验证传输路径...", "🛤️"))
try:
path = nx.shortest_path(G, source=solvent_vessel, target=target_vessel)
debug_print(f"✅ 找到路径: {' → '.join(path)}")
action_sequence.append(create_action_log(f"传输路径: {' → '.join(path)}", "🛤️"))
except nx.NetworkXNoPath:
debug_print(f"路径不可达: {solvent_vessel} → {target_vessel}")
action_sequence.append(create_action_log(f"路径不可达: {solvent_vessel} → {target_vessel}", ""))
raise ValueError(f"从溶剂容器 '{solvent_vessel}' 到目标容器 '{target_vessel}' 没有可用路径")
# 4. 使用pump_protocol转移溶剂
debug_print("🔍 步骤4: 转移溶剂...")
action_sequence.append(create_action_log("开始溶剂转移操作...", "🚰"))
debug_print(f"🚛 开始转移: {solvent_vessel}{target_vessel}")
debug_print(f"💧 转移体积: {volume} mL")
action_sequence.append(create_action_log(f"转移: {solvent_vessel}{target_vessel} ({volume}mL)", "🚛"))
try:
debug_print("🔄 生成泵送协议...")
action_sequence.append(create_action_log("正在生成泵送协议...", "🔄"))
pump_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent_vessel,
@@ -90,34 +256,41 @@ def generate_reset_handling_protocol(
amount="",
time=0.0,
viscous=False,
rinsing_solvent="", # 重置处理不需要清洗
rinsing_volume=0.0,
rinsing_repeats=0,
solid=False,
flowrate=2.5, # 正常流速
transfer_flowrate=0.5 # 正常转移流速
)
action_sequence.extend(pump_actions)
debug_print(f"泵送协议已添加: {len(pump_actions)} 个动作")
action_sequence.append(create_action_log(f"泵送协议完成 ({len(pump_actions)} 个操作)", ""))
except Exception as e:
debug_print(f"泵送协议生成失败: {str(e)}")
action_sequence.append(create_action_log(f"泵送协议生成失败: {str(e)}", ""))
raise ValueError(f"生成泵协议时出错: {str(e)}")
# 5. 等待溶剂稳定
debug_print("🔍 步骤5: 等待溶剂稳定...")
action_sequence.append(create_action_log("等待溶剂稳定...", ""))
# 模拟运行时间优化
debug_print("⏱️ 检查模拟运行时间限制...")
original_wait_time = 10.0 # 原始等待时间
simulation_time_limit = 5.0 # 模拟运行时间限制5秒
final_wait_time = min(original_wait_time, simulation_time_limit)
if original_wait_time > simulation_time_limit:
debug_print(f"🎮 模拟运行优化: {original_wait_time}s → {final_wait_time}s")
action_sequence.append(create_action_log(f"时间优化: {original_wait_time}s → {final_wait_time}s", ""))
else:
debug_print(f"✅ 时间在限制内: {final_wait_time}s 保持不变")
action_sequence.append(create_action_log(f"等待时间: {final_wait_time}s", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -125,50 +298,90 @@ def generate_reset_handling_protocol(
"description": f"等待溶剂 {solvent} 在容器 {target_vessel} 中稳定" + (f" (模拟时间)" if original_wait_time != final_wait_time else "")
}
})
debug_print(f"✅ 稳定等待已添加: {final_wait_time}s")
# 显示时间调整信息
if original_wait_time != final_wait_time:
debug_print(f"🎭 模拟优化说明: 原计划 {original_wait_time}s,实际模拟 {final_wait_time}s")
action_sequence.append(create_action_log("应用模拟时间优化", "🎭"))
# 总结
debug_print("=" * 60)
debug_print(f"🎉 重置处理协议生成完成!")
debug_print(f"📊 总结信息:")
debug_print(f" 📋 总动作数: {len(action_sequence)}")
debug_print(f" 🧪 溶剂: {solvent}")
debug_print(f" 🥽 源容器: {solvent_vessel}")
debug_print(f" 🥽 目标容器: {target_vessel} {'(默认)' if vessel is None else '(指定)'}")
debug_print(f" 💧 转移体积: {volume} mL")
debug_print(f" ⏱️ 预计总时间: {(final_wait_time + 5):.0f}秒")
debug_print(f" 🎯 操作结果: 已添加 {volume} mL {solvent}{target_vessel}")
debug_print("=" * 60)
# 添加完成日志
summary_msg = f"重置处理完成: {target_vessel} (使用 {volume}mL {solvent})"
if vessel is None:
summary_msg += " [默认容器]"
else:
summary_msg += " [指定容器]"
action_sequence.append(create_action_log(summary_msg, "🎉"))
return action_sequence
# === 便捷函数 ===
def reset_main_reactor(G: nx.DiGraph, solvent: str = "methanol", **kwargs) -> List[Dict[str, Any]]:
"""重置主反应器 (默认行为)"""
debug_print(f"🔄 重置主反应器,使用溶剂: {solvent}")
return generate_reset_handling_protocol(G, solvent=solvent, vessel=None, **kwargs)
def reset_custom_vessel(G: nx.DiGraph, vessel: str, solvent: str = "methanol", **kwargs) -> List[Dict[str, Any]]:
"""重置指定容器"""
debug_print(f"🔄 重置指定容器: {vessel},使用溶剂: {solvent}")
return generate_reset_handling_protocol(G, solvent=solvent, vessel=vessel, **kwargs)
def reset_with_water(G: nx.DiGraph, vessel: Optional[str] = None, **kwargs) -> List[Dict[str, Any]]:
"""使用水重置容器"""
target = vessel or "main_reactor"
debug_print(f"💧 使用水重置容器: {target}")
return generate_reset_handling_protocol(G, solvent="water", vessel=vessel, **kwargs)
def reset_with_methanol(G: nx.DiGraph, vessel: Optional[str] = None, **kwargs) -> List[Dict[str, Any]]:
"""使用甲醇重置容器"""
target = vessel or "main_reactor"
debug_print(f"🧪 使用甲醇重置容器: {target}")
return generate_reset_handling_protocol(G, solvent="methanol", vessel=vessel, **kwargs)
def reset_with_ethanol(G: nx.DiGraph, vessel: Optional[str] = None, **kwargs) -> List[Dict[str, Any]]:
"""使用乙醇重置容器"""
target = vessel or "main_reactor"
debug_print(f"🧪 使用乙醇重置容器: {target}")
return generate_reset_handling_protocol(G, solvent="ethanol", vessel=vessel, **kwargs)
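The solvent-specific wrappers above are thin partial applications of the main generator. `functools.partial` achieves the same effect; in this sketch, `make_reset` is a hypothetical stand-in for `generate_reset_handling_protocol`:

```python
# Convenience wrappers as partial applications of one generator function.
from functools import partial
from typing import Optional

def make_reset(solvent: str, vessel: Optional[str] = None) -> dict:
    # Stand-in: real code returns an action sequence, not a summary dict.
    return {"solvent": solvent, "vessel": vessel or "main_reactor"}

reset_with_water = partial(make_reset, "water")
reset_with_ethanol = partial(make_reset, "ethanol")

assert reset_with_water() == {"solvent": "water", "vessel": "main_reactor"}
assert reset_with_ethanol("flask_1")["vessel"] == "flask_1"
```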
# 测试函数
def test_reset_handling_protocol():
"""测试重置处理协议"""
debug_print("=== 重置处理协议增强中文版测试 ===")
# 测试溶剂名称
debug_print("🧪 测试常用溶剂名称...")
test_solvents = ["methanol", "ethanol", "water", "acetone", "dmso"]
for solvent in test_solvents:
debug_print(f" 🔍 测试溶剂: {solvent}")
# 测试容器参数
debug_print("🥽 测试容器参数...")
test_cases = [
{"solvent": "methanol", "vessel": None, "desc": "默认容器"},
{"solvent": "ethanol", "vessel": "reactor_2", "desc": "指定容器"},
{"solvent": "water", "vessel": "flask_1", "desc": "自定义容器"}
]
for case in test_cases:
debug_print(f" 🧪 测试案例: {case['desc']} - {case['solvent']} -> {case['vessel'] or 'main_reactor'}")
debug_print("✅ 测试完成")
if __name__ == "__main__":
test_reset_handling_protocol()

View File

@@ -2,54 +2,60 @@ from typing import List, Dict, Any, Union
import networkx as nx
import logging
import re
from .utils.vessel_parser import get_vessel, find_solvent_vessel
from .utils.resource_helper import get_resource_id, get_resource_data, get_resource_liquid_volume, update_vessel_volume
from .utils.logger_util import debug_print
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
def debug_print(message):
"""调试输出"""
logger.info(f"[RUN_COLUMN] {message}")
def parse_percentage(pct_str: str) -> float:
"""
解析百分比字符串为数值
Args:
pct_str: 百分比字符串(如 "40 %", "40%", "40")
Returns:
float: 百分比数值(0-100)
"""
if not pct_str or not pct_str.strip():
return 0.0
pct_str = pct_str.strip().lower()
debug_print(f"🔍 解析百分比: '{pct_str}'")
# 移除百分号和空格
pct_clean = re.sub(r'[%\s]', '', pct_str)
# 提取数字
match = re.search(r'([0-9]*\.?[0-9]+)', pct_clean)
if match:
value = float(match.group(1))
debug_print(f"✅ 百分比解析结果: {value}%")
return value
debug_print(f"⚠️ 无法解析百分比: '{pct_str}',返回0.0")
return 0.0
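`parse_percentage` strips `%` and whitespace, then extracts the first number with a regex. A self-contained sketch of the same parsing (`pct` is an illustrative name):

```python
# Percentage parsing: remove '%' and spaces, pull the first numeric token.
import re

def pct(s: str) -> float:
    cleaned = re.sub(r"[%\s]", "", s.strip().lower())
    m = re.search(r"([0-9]*\.?[0-9]+)", cleaned)
    return float(m.group(1)) if m else 0.0

assert pct("40 %") == 40.0
assert pct("12.5%") == 12.5
assert pct("n/a") == 0.0   # no digits -> fallback 0.0
```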
def parse_ratio(ratio_str: str) -> tuple:
"""
解析比例字符串为两个数值
Args:
ratio_str: 比例字符串(如 "5:95", "1:1", "40:60")
Returns:
tuple: (ratio1, ratio2) 两个比例值(百分比)
"""
if not ratio_str or not ratio_str.strip():
return (50.0, 50.0)  # 默认1:1
ratio_str = ratio_str.strip()
debug_print(f"🔍 解析比例: '{ratio_str}'")
# 支持多种分隔符:: / -
if ':' in ratio_str:
parts = ratio_str.split(':')
@@ -60,82 +66,101 @@ def parse_ratio(ratio_str: str) -> tuple:
elif 'to' in ratio_str.lower():
parts = ratio_str.lower().split('to')
else:
debug_print(f"⚠️ 无法解析比例格式: '{ratio_str}',使用默认1:1")
return (50.0, 50.0)
if len(parts) >= 2:
try:
ratio1 = float(parts[0].strip())
ratio2 = float(parts[1].strip())
total = ratio1 + ratio2
# 转换为百分比
pct1 = (ratio1 / total) * 100
pct2 = (ratio2 / total) * 100
debug_print(f"✅ 比例解析结果: {ratio1}:{ratio2} -> {pct1:.1f}%:{pct2:.1f}%")
return (pct1, pct2)
except ValueError as e:
debug_print(f"⚠️ 比例数值转换失败: {str(e)}")
debug_print(f"⚠️ 比例解析失败,使用默认1:1")
return (50.0, 50.0)
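`parse_ratio` converts `"a:b"` into two percentages that sum to 100, falling back to 50:50 when the total is zero. A compact sketch of that arithmetic (`ratio_to_percent` is an illustrative name):

```python
# Normalize a two-part ratio to percentages summing to 100.
def ratio_to_percent(a: float, b: float):
    total = a + b
    if total == 0:
        return (50.0, 50.0)   # degenerate input -> default 1:1
    return (a * 100.0 / total, b * 100.0 / total)

assert ratio_to_percent(5, 95) == (5.0, 95.0)
assert ratio_to_percent(1, 1) == (50.0, 50.0)
assert ratio_to_percent(2, 8) == (20.0, 80.0)
assert ratio_to_percent(0, 0) == (50.0, 50.0)
```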
def parse_rf_value(rf_str: str) -> float:
"""
解析Rf值字符串
Args:
rf_str: Rf值字符串(如 "0.3", "0.45", "?")
Returns:
float: Rf值(0-1)
"""
if not rf_str or not rf_str.strip():
return 0.3  # 默认Rf值
rf_str = rf_str.strip().lower()
debug_print(f"🔍 解析Rf值: '{rf_str}'")
# 处理未知Rf值
if rf_str in ['?', 'unknown', 'tbd', 'to be determined']:
default_rf = 0.3
debug_print(f"❓ 检测到未知Rf值使用默认值: {default_rf}")
return default_rf
# 提取数字
match = re.search(r'([0-9]*\.?[0-9]+)', rf_str)
if match:
value = float(match.group(1))
# 确保Rf值在0-1范围内
if value > 1.0:
value = value / 100.0 # 可能是百分比形式
value = max(0.0, min(1.0, value)) # 限制在0-1范围
debug_print(f"✅ Rf值解析结果: {value}")
return value
debug_print(f"⚠️ 无法解析Rf值: '{rf_str}',使用默认值0.3")
return 0.3
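`parse_rf_value` treats numbers above 1 as percentages and clamps the result into [0, 1]. A standalone sketch of just that normalization step (`normalize_rf` is an illustrative name):

```python
# Rf normalization: values > 1 are assumed to be percentages, then clamped.
def normalize_rf(value: float) -> float:
    if value > 1.0:
        value = value / 100.0   # e.g. "30" was meant as 30%
    return max(0.0, min(1.0, value))

assert normalize_rf(0.45) == 0.45
assert normalize_rf(30.0) == 0.3
assert normalize_rf(250.0) == 1.0   # 2.5 after scaling, clamped to 1.0
assert normalize_rf(-0.2) == 0.0
```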
def find_column_device(G: nx.DiGraph) -> str:
"""查找柱层析设备"""
debug_print("🔍 查找柱层析设备...")
# 查找虚拟柱设备
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'virtual_column' in node_class.lower() or 'column' in node_class.lower():
debug_print(f"🎉 找到柱层析设备: {node}")
return node
# 如果没有找到,尝试创建虚拟设备名称
possible_names = ['column_1', 'virtual_column_1', 'chromatography_column_1']
for name in possible_names:
if name in G.nodes():
debug_print(f"🎉 找到柱设备: {name}")
return name
debug_print("⚠️ 未找到柱层析设备,将使用pump protocol直接转移")
return ""
def find_column_vessel(G: nx.DiGraph, column: str) -> str:
"""查找柱容器"""
debug_print(f"🔍 查找柱容器: '{column}'")
# 直接检查column参数是否是容器
if column in G.nodes():
node_type = G.nodes[column].get('type', '')
if node_type == 'container':
debug_print(f"🎉 找到柱容器: {column}")
return column
# 尝试常见的命名规则
possible_names = [
f"column_{column}",
f"{column}_column",
f"vessel_{column}",
f"{column}_vessel",
"column_vessel",
@@ -144,25 +169,211 @@ def find_column_vessel(G: nx.DiGraph, column: str) -> str:
"preparative_column",
"column"
]
for vessel_name in possible_names:
if vessel_name in G.nodes():
node_type = G.nodes[vessel_name].get('type', '')
if node_type == 'container':
debug_print(f"🎉 找到柱容器: {vessel_name}")
return vessel_name
debug_print(f"⚠️ 未找到柱容器,将直接在源容器中进行分离")
return ""
def find_solvent_vessel(G: nx.DiGraph, solvent: str) -> str:
"""查找溶剂容器 - 增强版"""
if not solvent or not solvent.strip():
return ""
solvent = solvent.strip().replace(' ', '_').lower()
debug_print(f"🔍 查找溶剂容器: '{solvent}'")
# 🔧 方法1直接搜索 data.reagent_name
for node in G.nodes():
node_data = G.nodes[node].get('data', {})
node_type = G.nodes[node].get('type', '')
# 只搜索容器类型的节点
if node_type == 'container':
reagent_name = node_data.get('reagent_name', '').lower()
reagent_config = G.nodes[node].get('config', {}).get('reagent', '').lower()
# 检查 data.reagent_name 和 config.reagent
if reagent_name == solvent or reagent_config == solvent:
debug_print(f"🎉 通过reagent_name找到溶剂容器: {node} (reagent: {reagent_name or reagent_config}) ✨")
return node
# 模糊匹配 reagent_name
if solvent in reagent_name or reagent_name in solvent:
debug_print(f"🎉 通过reagent_name模糊匹配到溶剂容器: {node} (reagent: {reagent_name}) ✨")
return node
if solvent in reagent_config or reagent_config in solvent:
debug_print(f"🎉 通过config.reagent模糊匹配到溶剂容器: {node} (reagent: {reagent_config}) ✨")
return node
# 🔧 方法2常见的溶剂容器命名规则
possible_names = [
f"flask_{solvent}",
f"bottle_{solvent}",
f"reagent_{solvent}",
f"{solvent}_bottle",
f"{solvent}_flask",
f"solvent_{solvent}",
f"reagent_bottle_{solvent}"
]
for vessel_name in possible_names:
if vessel_name in G.nodes():
node_type = G.nodes[vessel_name].get('type', '')
if node_type == 'container':
debug_print(f"🎉 通过命名规则找到溶剂容器: {vessel_name}")
return vessel_name
# 🔧 方法3节点名称模糊匹配
for node in G.nodes():
node_type = G.nodes[node].get('type', '')
if node_type == 'container':
if ('flask_' in node or 'bottle_' in node or 'reagent_' in node) and solvent in node.lower():
debug_print(f"🎉 通过节点名称模糊匹配到溶剂容器: {node}")
return node
# 🔧 方法4特殊溶剂名称映射
solvent_mapping = {
'dmf': ['dmf', 'dimethylformamide', 'n,n-dimethylformamide'],
'ethyl_acetate': ['ethyl_acetate', 'ethylacetate', 'etoac', 'ea'],
'hexane': ['hexane', 'hexanes', 'n-hexane'],
'methanol': ['methanol', 'meoh', 'ch3oh'],
'water': ['water', 'h2o', 'distilled_water'],
'acetone': ['acetone', 'ch3coch3', '2-propanone'],
'dichloromethane': ['dichloromethane', 'dcm', 'ch2cl2', 'methylene_chloride'],
'chloroform': ['chloroform', 'chcl3', 'trichloromethane']
}
# 查找映射的同义词
for canonical_name, synonyms in solvent_mapping.items():
if solvent in synonyms:
debug_print(f"🔍 检测到溶剂同义词: '{solvent}' -> '{canonical_name}'")
return find_solvent_vessel(G, canonical_name) # 递归搜索
debug_print(f"⚠️ 未找到溶剂 '{solvent}' 的容器")
return ""
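The synonym table above re-searches recursively after mapping an alias to its canonical name. Inverting the table once into a flat alias-to-canonical dict avoids the recursion (sketch; the table is abbreviated from the one in the source):

```python
# Flatten the synonym table into a single alias -> canonical lookup.
SOLVENT_SYNONYMS = {
    "methanol": ["methanol", "meoh", "ch3oh"],
    "dichloromethane": ["dichloromethane", "dcm", "ch2cl2"],
    "ethyl_acetate": ["ethyl_acetate", "etoac", "ea"],
}
CANONICAL = {alias: name
             for name, aliases in SOLVENT_SYNONYMS.items()
             for alias in aliases}

assert CANONICAL["dcm"] == "dichloromethane"
assert CANONICAL["ea"] == "ethyl_acetate"
assert CANONICAL.get("toluene") is None   # unknown solvents fall through
```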
def get_vessel_liquid_volume(vessel: dict) -> float:
"""
获取容器中的液体体积 - 支持vessel字典
Args:
vessel: 容器字典
Returns:
float: 液体体积(mL)
"""
if not vessel or "data" not in vessel:
debug_print(f"⚠️ 容器数据为空,返回 0.0mL")
return 0.0
vessel_data = vessel["data"]
vessel_id = vessel.get("id", "unknown")
debug_print(f"🔍 读取容器 '{vessel_id}' 体积数据: {vessel_data}")
# 检查liquid_volume字段
if "liquid_volume" in vessel_data:
liquid_volume = vessel_data["liquid_volume"]
# 处理列表格式
if isinstance(liquid_volume, list):
if len(liquid_volume) > 0:
volume = liquid_volume[0]
if isinstance(volume, (int, float)):
debug_print(f"✅ 容器 '{vessel_id}' 体积: {volume}mL (列表格式)")
return float(volume)
# 处理直接数值格式
elif isinstance(liquid_volume, (int, float)):
debug_print(f"✅ 容器 '{vessel_id}' 体积: {liquid_volume}mL (数值格式)")
return float(liquid_volume)
# 检查其他可能的体积字段
volume_keys = ['current_volume', 'total_volume', 'volume']
for key in volume_keys:
if key in vessel_data:
try:
volume = float(vessel_data[key])
if volume > 0:
debug_print(f"✅ 容器 '{vessel_id}' 体积: {volume}mL (字段: {key})")
return volume
except (ValueError, TypeError):
continue
debug_print(f"⚠️ 无法获取容器 '{vessel_id}' 的体积,返回默认值 50.0mL")
return 50.0
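`get_vessel_liquid_volume` accepts either a scalar or a one-element list under `liquid_volume`, then falls back to other volume keys and finally a 50 mL default. A condensed sketch of that branching (`read_volume` is an illustrative name):

```python
# Read a vessel's liquid volume: list or scalar, with keyed fallbacks.
def read_volume(vessel: dict, default: float = 50.0) -> float:
    data = vessel.get("data", {})
    vol = data.get("liquid_volume")
    if isinstance(vol, list):                 # list format: take first entry
        vol = vol[0] if vol else None
    if isinstance(vol, (int, float)):
        return float(vol)
    for key in ("current_volume", "total_volume", "volume"):
        try:
            v = float(data[key])
            if v > 0:
                return v
        except (KeyError, ValueError, TypeError):
            continue
    return default                            # nothing usable found

assert read_volume({"data": {"liquid_volume": [120.0]}}) == 120.0
assert read_volume({"data": {"liquid_volume": 75}}) == 75.0
assert read_volume({"data": {}}) == 50.0
```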
def update_vessel_volume(vessel: dict, G: nx.DiGraph, new_volume: float, description: str = "") -> None:
"""
更新容器体积同时更新vessel字典和图节点
Args:
vessel: 容器字典
G: 网络图
new_volume: 新体积
description: 更新描述
"""
vessel_id = vessel.get("id", "unknown")
if description:
debug_print(f"🔧 更新容器体积 - {description}")
# 更新vessel字典中的体积
if "data" in vessel:
if "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] = new_volume
else:
vessel["data"]["liquid_volume"] = [new_volume]
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"] = {"liquid_volume": new_volume}
# 同时更新图中的容器数据
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
else:
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
debug_print(f"📊 容器 '{vessel_id}' 体积已更新为: {new_volume:.2f}mL")
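`update_vessel_volume` writes the new volume to both the vessel dict and the matching graph node so the two views stay consistent. A minimal sketch with plain dicts standing in for the graph (it omits the list-preservation branch of the original):

```python
# Keep the vessel dict and the graph node's volume in sync.
def sync_volume(vessel: dict, graph_nodes: dict, new_volume: float) -> None:
    vessel.setdefault("data", {})["liquid_volume"] = new_volume
    node = graph_nodes.get(vessel.get("id"))
    if node is not None:                      # mirror into the graph node
        node.setdefault("data", {})["liquid_volume"] = new_volume

vessel = {"id": "reactor_1", "data": {"liquid_volume": 10.0}}
nodes = {"reactor_1": {"type": "container", "data": {}}}
sync_volume(vessel, nodes, 42.0)
assert vessel["data"]["liquid_volume"] == 42.0
assert nodes["reactor_1"]["data"]["liquid_volume"] == 42.0
```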
def calculate_solvent_volumes(total_volume: float, pct1: float, pct2: float) -> tuple:
"""根据百分比计算溶剂体积"""
volume1 = (total_volume * pct1) / 100.0
volume2 = (total_volume * pct2) / 100.0
debug_print(f"🧮 溶剂体积计算: 总体积{total_volume}mL")
debug_print(f" - 溶剂1: {pct1}% = {volume1}mL")
debug_print(f" - 溶剂2: {pct2}% = {volume2}mL")
return (volume1, volume2)
def generate_run_column_protocol(
G: nx.DiGraph,
from_vessel: dict,  # 🔧 修改:从字符串改为字典类型
to_vessel: dict,  # 🔧 修改:从字符串改为字典类型
column: str,
rf: str = "",
pct1: str = "",
@@ -174,7 +385,7 @@ def generate_run_column_protocol(
) -> List[Dict[str, Any]]:
"""
生成柱层析分离的协议序列 - 支持vessel字典和体积运算
Args:
G: 有向图,节点为设备和容器,边为流体管道
from_vessel: 源容器字典(从XDL传入)
@@ -187,112 +398,173 @@ def generate_run_column_protocol(
solvent2: 第二种溶剂名称(可选)
ratio: 溶剂比例(如 "5:95",可选,优先级高于pct1/pct2)
**kwargs: 其他可选参数
Returns:
List[Dict[str, Any]]: 柱层析分离操作的动作序列
"""
# 🔧 核心修改从字典中提取容器ID
from_vessel_id, _ = get_vessel(from_vessel)
to_vessel_id, _ = get_vessel(to_vessel)
debug_print("🏛️" * 20)
debug_print("🚀 开始生成柱层析协议(支持vessel字典和体积运算)")
debug_print(f"📝 输入参数:")
debug_print(f" 🥽 from_vessel: {from_vessel} (ID: {from_vessel_id})")
debug_print(f" 🥽 to_vessel: {to_vessel} (ID: {to_vessel_id})")
debug_print(f" 🏛️ column: '{column}'")
debug_print(f" 📊 rf: '{rf}'")
debug_print(f" 🧪 溶剂配比: pct1='{pct1}', pct2='{pct2}', ratio='{ratio}'")
debug_print(f" 🧪 溶剂名称: solvent1='{solvent1}', solvent2='{solvent2}'")
debug_print("🏛️" * 20)
action_sequence = []
# 记录柱层析前的容器状态
# 🔧 新增:记录柱层析前的容器状态
debug_print("🔍 记录柱层析前容器状态...")
original_from_volume = get_vessel_liquid_volume(from_vessel)
original_to_volume = get_vessel_liquid_volume(to_vessel)
debug_print(f"📊 柱层析前状态:")
debug_print(f" - 源容器 {from_vessel_id}: {original_from_volume:.2f}mL")
debug_print(f" - 目标容器 {to_vessel_id}: {original_to_volume:.2f}mL")
# === 参数验证 ===
debug_print("📍 步骤1: 参数验证...")
if not from_vessel_id:  # 🔧 使用 from_vessel_id
raise ValueError("from_vessel 参数不能为空")
if not to_vessel_id:  # 🔧 使用 to_vessel_id
raise ValueError("to_vessel 参数不能为空")
if not column:
raise ValueError("column 参数不能为空")
if from_vessel_id not in G.nodes():  # 🔧 使用 from_vessel_id
raise ValueError(f"源容器 '{from_vessel_id}' 不存在于系统中")
if to_vessel_id not in G.nodes():  # 🔧 使用 to_vessel_id
raise ValueError(f"目标容器 '{to_vessel_id}' 不存在于系统中")
debug_print("✅ 基本参数验证通过")
# === 参数解析 ===
debug_print("📍 步骤2: 参数解析...")
# 解析Rf值
final_rf = parse_rf_value(rf)
debug_print(f"🎯 最终Rf值: {final_rf}")
# 解析溶剂比例ratio优先级高于pct1/pct2
if ratio and ratio.strip():
final_pct1, final_pct2 = parse_ratio(ratio)
debug_print(f"📊 使用ratio参数: {final_pct1:.1f}% : {final_pct2:.1f}%")
else:
final_pct1 = parse_percentage(pct1) if pct1 else 50.0
final_pct2 = parse_percentage(pct2) if pct2 else 50.0
# 如果百分比和不是100%,进行归一化
total_pct = final_pct1 + final_pct2
if total_pct == 0:
final_pct1, final_pct2 = 50.0, 50.0
elif total_pct != 100.0:
final_pct1 = (final_pct1 / total_pct) * 100
final_pct2 = (final_pct2 / total_pct) * 100
debug_print(f"📊 使用百分比参数: {final_pct1:.1f}% : {final_pct2:.1f}%")
# 设置默认溶剂(如果未指定)
final_solvent1 = solvent1.strip() if solvent1 else "ethyl_acetate"
final_solvent2 = solvent2.strip() if solvent2 else "hexane"
debug_print(f"🧪 最终溶剂: {final_solvent1} : {final_solvent2}")
# === 查找设备和容器 ===
debug_print("📍 步骤3: 查找设备和容器...")
# 查找柱层析设备
column_device_id = find_column_device(G)
# 查找柱容器
column_vessel = find_column_vessel(G, column)
# 查找溶剂容器
solvent1_vessel = find_solvent_vessel(G, final_solvent1)
solvent2_vessel = find_solvent_vessel(G, final_solvent2)
debug_print(f"🔧 设备映射:")
debug_print(f" - 柱设备: '{column_device_id}'")
debug_print(f" - 柱容器: '{column_vessel}'")
debug_print(f" - 溶剂1容器: '{solvent1_vessel}'")
debug_print(f" - 溶剂2容器: '{solvent2_vessel}'")
# === 获取源容器体积 ===
debug_print("📍 步骤4: 获取源容器体积...")
source_volume = original_from_volume
if source_volume <= 0:
source_volume = 50.0  # 默认体积
debug_print(f"⚠️ 无法获取源容器体积,使用默认值: {source_volume}mL")
else:
debug_print(f"✅ 源容器体积: {source_volume}mL")
# === 计算溶剂体积 ===
debug_print("📍 步骤5: 计算溶剂体积...")
# 洗脱溶剂通常是样品体积的2-5倍
total_elution_volume = source_volume * 3.0
solvent1_volume, solvent2_volume = calculate_solvent_volumes(
total_elution_volume, final_pct1, final_pct2
)
# === 执行柱层析流程 ===
debug_print("📍 步骤6: 执行柱层析流程...")
# 🔧 新增:体积变化跟踪变量
current_from_volume = source_volume
current_to_volume = original_to_volume
current_column_volume = 0.0
try:
# 步骤6.1: 样品上柱(如果有独立的柱容器)
if column_vessel and column_vessel != from_vessel_id:  # 🔧 使用 from_vessel_id
debug_print(f"📍 6.1: 样品上柱 - {source_volume}mL 从 {from_vessel_id}{column_vessel}")
try:
sample_transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=from_vessel_id,  # 🔧 使用 from_vessel_id
to_vessel=column_vessel,
volume=source_volume,
flowrate=1.0,  # 慢速上柱
transfer_flowrate=0.5,
rinsing_solvent="",  # 暂不冲洗
rinsing_volume=0.0,
rinsing_repeats=0
)
action_sequence.extend(sample_transfer_actions)
debug_print(f"✅ 样品上柱完成,添加了 {len(sample_transfer_actions)} 个动作")
# 🔧 新增:更新体积 - 样品转移到柱上
current_from_volume = 0.0 # 源容器体积变为0
current_column_volume = source_volume # 柱容器体积增加
update_vessel_volume(from_vessel, G, current_from_volume, "样品上柱后,源容器清空")
# 如果柱容器在图中,也更新其体积
if column_vessel in G.nodes():
if 'data' not in G.nodes[column_vessel]:
G.nodes[column_vessel]['data'] = {}
G.nodes[column_vessel]['data']['liquid_volume'] = current_column_volume
debug_print(f"📊 柱容器 '{column_vessel}' 体积更新为: {current_column_volume:.2f}mL")
except Exception as e:
debug_print(f"⚠️ 样品上柱失败: {str(e)}")
# 步骤6.2: 添加洗脱溶剂1(如果有溶剂容器)
if solvent1_vessel and solvent1_volume > 0:
debug_print(f"📍 6.2: 添加洗脱溶剂1 - {solvent1_volume:.1f}mL {final_solvent1}")
try:
target_vessel = column_vessel if column_vessel else from_vessel_id  # 🔧 使用 from_vessel_id
solvent1_transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent1_vessel,
@@ -302,22 +574,27 @@ def generate_run_column_protocol(
transfer_flowrate=1.0
)
action_sequence.extend(solvent1_transfer_actions)
debug_print(f"✅ 溶剂1添加完成添加了 {len(solvent1_transfer_actions)} 个动作")
# 🔧 新增:更新体积 - 添加溶剂1
if target_vessel == column_vessel:
current_column_volume += solvent1_volume
if column_vessel in G.nodes():
G.nodes[column_vessel]['data']['liquid_volume'] = current_column_volume
debug_print(f"📊 柱容器体积增加: +{solvent1_volume:.2f}mL = {current_column_volume:.2f}mL")
elif target_vessel == from_vessel_id:
current_from_volume += solvent1_volume
update_vessel_volume(from_vessel, G, current_from_volume, "添加溶剂1后")
except Exception as e:
debug_print(f"⚠️ 溶剂1添加失败: {str(e)}")
# 步骤6.3: 添加洗脱溶剂2(如果有溶剂容器)
if solvent2_vessel and solvent2_volume > 0:
debug_print(f"📍 6.3: 添加洗脱溶剂2 - {solvent2_volume:.1f}mL {final_solvent2}")
try:
target_vessel = column_vessel if column_vessel else from_vessel_id  # 🔧 使用 from_vessel_id
solvent2_transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent2_vessel,
@@ -327,26 +604,31 @@ def generate_run_column_protocol(
transfer_flowrate=1.0
)
action_sequence.extend(solvent2_transfer_actions)
debug_print(f"✅ 溶剂2添加完成添加了 {len(solvent2_transfer_actions)} 个动作")
# 🔧 新增:更新体积 - 添加溶剂2
if target_vessel == column_vessel:
current_column_volume += solvent2_volume
if column_vessel in G.nodes():
G.nodes[column_vessel]['data']['liquid_volume'] = current_column_volume
debug_print(f"📊 柱容器体积增加: +{solvent2_volume:.2f}mL = {current_column_volume:.2f}mL")
elif target_vessel == from_vessel_id:
current_from_volume += solvent2_volume
update_vessel_volume(from_vessel, G, current_from_volume, "添加溶剂2后")
except Exception as e:
debug_print(f"⚠️ 溶剂2添加失败: {str(e)}")
# 步骤6.4: 使用柱层析设备执行分离(如果有设备)
if column_device_id:
debug_print(f"📍 6.4: 使用柱层析设备执行分离")
column_separation_action = {
"device_id": column_device_id,
"action_name": "run_column",
"action_kwargs": {
"from_vessel": from_vessel_id, # 🔧 使用 from_vessel_id
"to_vessel": to_vessel_id, # 🔧 使用 to_vessel_id
"column": column,
"rf": rf,
"pct1": pct1,
@@ -357,65 +639,85 @@ def generate_run_column_protocol(
}
}
action_sequence.append(column_separation_action)
debug_print(f"✅ 柱层析设备动作已添加")
# 等待分离完成
separation_time = max(30, min(120, int(total_elution_volume / 2))) # 30-120秒基于体积
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": separation_time}
})
debug_print(f"✅ 等待分离完成: {separation_time}s")
# 步骤6.5: 产物收集(从柱容器到目标容器)
if column_vessel and column_vessel != to_vessel_id: # 🔧 使用 to_vessel_id
debug_print(f"📍 6.5: 产物收集 - 从 {column_vessel} 到 {to_vessel_id}")
try:
# 估算产物体积(原始样品体积的70-90%,收率考虑)
product_volume = source_volume * 0.8
product_transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=column_vessel,
to_vessel=to_vessel_id, # 🔧 使用 to_vessel_id
volume=product_volume,
flowrate=1.5,
transfer_flowrate=0.8
)
action_sequence.extend(product_transfer_actions)
debug_print(f"✅ 产物收集完成,添加了 {len(product_transfer_actions)} 个动作")
# 🔧 新增:更新体积 - 产物收集到目标容器
current_to_volume += product_volume
current_column_volume -= product_volume # 柱容器体积减少
update_vessel_volume(to_vessel, G, current_to_volume, "产物收集后")
# 更新柱容器体积
if column_vessel in G.nodes():
G.nodes[column_vessel]['data']['liquid_volume'] = max(0.0, current_column_volume)
debug_print(f"📊 柱容器体积减少: -{product_volume:.2f}mL = {current_column_volume:.2f}mL")
except Exception as e:
debug_print(f"⚠️ 产物收集失败: {str(e)}")
# 步骤6.6: 如果没有独立的柱设备和容器,执行简化的直接转移
if not column_device_id and not column_vessel:
debug_print(f"📍 6.6: 简化模式 - 直接转移 {source_volume}mL 从 {from_vessel_id} 到 {to_vessel_id}")
try:
direct_transfer_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=from_vessel_id, # 🔧 使用 from_vessel_id
to_vessel=to_vessel_id, # 🔧 使用 to_vessel_id
volume=source_volume,
flowrate=2.0,
transfer_flowrate=1.0
)
action_sequence.extend(direct_transfer_actions)
debug_print(f"✅ 直接转移完成,添加了 {len(direct_transfer_actions)} 个动作")
# 🔧 新增:更新体积 - 直接转移
current_from_volume = 0.0 # 源容器清空
current_to_volume += source_volume # 目标容器增加
update_vessel_volume(from_vessel, G, current_from_volume, "直接转移后,源容器清空")
update_vessel_volume(to_vessel, G, current_to_volume, "直接转移后,目标容器增加")
except Exception as e:
debug_print(f"⚠️ 直接转移失败: {str(e)}")
except Exception as e:
debug_print(f"协议生成失败: {str(e)} 😭")
# 不添加不确定的动作,直接让action_sequence保持为空列表
# action_sequence 已经在函数开始时初始化为 []
# 确保至少有一个有效的动作,如果完全失败就返回空列表
if not action_sequence:
debug_print("⚠️ 没有生成任何有效动作")
# 可以选择返回空列表或添加一个基本的等待动作
action_sequence.append({
"action_name": "wait",
"action_kwargs": {
@@ -423,50 +725,83 @@ def generate_run_column_protocol(
"description": "柱层析协议执行完成"
}
})
# 🔧 新增:柱层析完成后的最终状态报告
final_from_volume = get_vessel_liquid_volume(from_vessel)
final_to_volume = get_vessel_liquid_volume(to_vessel)
# 🎊 总结
debug_print("🏛️" * 20)
debug_print(f"🎉 柱层析协议生成完成! ✨")
debug_print(f"📊 总动作数: {len(action_sequence)}")
debug_print(f"🥽 路径: {from_vessel_id} → {to_vessel_id}")
debug_print(f"🏛️ 柱子: {column}")
debug_print(f"🧪 溶剂: {final_solvent1}:{final_solvent2} = {final_pct1:.1f}%:{final_pct2:.1f}%")
debug_print(f"📊 体积变化统计:")
debug_print(f" 源容器 {from_vessel_id}:")
debug_print(f" - 柱层析前: {original_from_volume:.2f}mL")
debug_print(f" - 柱层析后: {final_from_volume:.2f}mL")
debug_print(f" 目标容器 {to_vessel_id}:")
debug_print(f" - 柱层析前: {original_to_volume:.2f}mL")
debug_print(f" - 柱层析后: {final_to_volume:.2f}mL")
debug_print(f" - 收集体积: {final_to_volume - original_to_volume:.2f}mL")
debug_print(f"⏱️ 预计总时间: {len(action_sequence) * 5:.0f} 秒 ⌛")
debug_print("🏛️" * 20)
return action_sequence
# 🔧 新增:便捷函数
def generate_ethyl_acetate_hexane_column_protocol(G: nx.DiGraph, from_vessel: dict, to_vessel: dict,
column: str, ratio: str = "30:70") -> List[Dict[str, Any]]:
"""乙酸乙酯-己烷柱层析(常用组合)"""
from_vessel_id = from_vessel["id"]
to_vessel_id = to_vessel["id"]
debug_print(f"🧪⛽ 乙酸乙酯-己烷柱层析: {from_vessel_id} → {to_vessel_id} @ {ratio}")
return generate_run_column_protocol(G, from_vessel, to_vessel, column,
solvent1="ethyl_acetate", solvent2="hexane", ratio=ratio)
def generate_methanol_dcm_column_protocol(G: nx.DiGraph, from_vessel: dict, to_vessel: dict,
column: str, ratio: str = "5:95") -> List[Dict[str, Any]]:
"""甲醇-二氯甲烷柱层析"""
from_vessel_id = from_vessel["id"]
to_vessel_id = to_vessel["id"]
debug_print(f"🧪🧪 甲醇-DCM柱层析: {from_vessel_id} → {to_vessel_id} @ {ratio}")
return generate_run_column_protocol(G, from_vessel, to_vessel, column,
solvent1="methanol", solvent2="dichloromethane", ratio=ratio)
def generate_gradient_column_protocol(G: nx.DiGraph, from_vessel: dict, to_vessel: dict,
column: str, start_ratio: str = "10:90",
end_ratio: str = "50:50") -> List[Dict[str, Any]]:
"""梯度洗脱柱层析(中等比例)"""
from_vessel_id, _ = get_vessel(from_vessel)
to_vessel_id, _ = get_vessel(to_vessel)
debug_print(f"📈 梯度柱层析: {from_vessel_id} → {to_vessel_id} ({start_ratio} → {end_ratio})")
# 使用中间比例作为近似
return generate_run_column_protocol(G, from_vessel, to_vessel, column, ratio="30:70")
def generate_polar_column_protocol(G: nx.DiGraph, from_vessel: dict, to_vessel: dict,
column: str) -> List[Dict[str, Any]]:
"""极性化合物柱层析(高极性溶剂比例)"""
from_vessel_id, _ = get_vessel(from_vessel)
to_vessel_id, _ = get_vessel(to_vessel)
debug_print(f"⚡ 极性化合物柱层析: {from_vessel_id} → {to_vessel_id}")
return generate_run_column_protocol(G, from_vessel, to_vessel, column,
solvent1="ethyl_acetate", solvent2="hexane", ratio="70:30")
def generate_nonpolar_column_protocol(G: nx.DiGraph, from_vessel: dict, to_vessel: dict,
column: str) -> List[Dict[str, Any]]:
"""非极性化合物柱层析(低极性溶剂比例)"""
from_vessel_id, _ = get_vessel(from_vessel)
to_vessel_id, _ = get_vessel(to_vessel)
debug_print(f"🛢️ 非极性化合物柱层析: {from_vessel_id} → {to_vessel_id}")
return generate_run_column_protocol(G, from_vessel, to_vessel, column,
solvent1="ethyl_acetate", solvent2="hexane", ratio="5:95")
# 测试函数
def test_run_column_protocol():
"""测试柱层析协议"""
debug_print("🧪 === RUN COLUMN PROTOCOL 测试 ===")
debug_print("测试完成 🎉")
if __name__ == "__main__":
test_run_column_protocol()
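The convenience wrappers above all pass the solvent ratio as a `"30:70"`-style string, which the main protocol ultimately turns into the `final_pct1`/`final_pct2` percentages seen in the logs. A minimal sketch of that conversion (`parse_ratio` is an illustrative helper, not part of the module):

```python
def parse_ratio(ratio):
    """Split a 'solvent1:solvent2' string such as '30:70' into two
    percentages normalized to sum to 100."""
    part1, part2 = ratio.split(":")
    pct1, pct2 = float(part1), float(part2)
    total = pct1 + pct2
    return pct1 / total * 100.0, pct2 / total * 100.0

print(parse_ratio("30:70"))  # → (30.0, 70.0)
```

Normalizing by the sum means a caller can also pass un-normalized parts such as `"1:3"` and still get percentages.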


@@ -1,11 +1,41 @@
from functools import partial
import networkx as nx
import re
import logging
import sys
from typing import List, Dict, Any, Union
from .utils.vessel_parser import get_vessel
from .utils.logger_util import action_log
from .pump_protocol import generate_pump_protocol_with_rinsing
logger = logging.getLogger(__name__)
# 确保输出编码为UTF-8
if hasattr(sys.stdout, 'reconfigure'):
try:
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
except Exception:
pass
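The `hasattr` guard above exists because `reconfigure()` is only available on `io.TextIOWrapper` (Python 3.7+); a stream that has been swapped out, e.g. for an `io.StringIO` during testing, does not provide it. A small self-contained illustration:

```python
import io

# A real TextIOWrapper supports reconfigure(); change its encoding in place.
wrapped = io.TextIOWrapper(io.BytesIO(), encoding="ascii")
if hasattr(wrapped, "reconfigure"):
    wrapped.reconfigure(encoding="utf-8")
print(wrapped.encoding)  # utf-8

# A replacement stream like StringIO has no reconfigure(), so the guard
# (or the try/except above) prevents an AttributeError.
print(hasattr(io.StringIO(), "reconfigure"))  # False
```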
def debug_print(message):
"""调试输出函数 - 支持中文"""
try:
# 确保消息是字符串格式
safe_message = str(message)
logger.info(f"[SEPARATE] {safe_message}")
except UnicodeEncodeError:
# 如果编码失败,尝试替换不支持的字符
safe_message = str(message).encode('utf-8', errors='replace').decode('utf-8')
logger.info(f"[SEPARATE] {safe_message}")
except Exception as e:
# 最后的安全措施
fallback_message = f"日志输出错误: {repr(message)}"
logger.info(f"[SEPARATE] {fallback_message}")
create_action_log = partial(action_log, prefix="[SEPARATE]")
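`create_action_log` is built with `functools.partial` so every call site gets the `[SEPARATE]` prefix without repeating it. Sketched with a stand-in for `action_log` (the real helper in `.utils.logger_util` may return different fields):

```python
from functools import partial

def action_log(message, emoji="", prefix=""):
    # Stand-in for utils.logger_util.action_log: wrap a log message
    # in an action dict that the executor can play back.
    text = " ".join(part for part in (prefix, emoji, message) if part)
    return {"action_name": "log", "action_kwargs": {"message": text}}

create_action_log = partial(action_log, prefix="[SEPARATE]")
entry = create_action_log("分离循环 1/3 开始", "🔄")
print(entry["action_kwargs"]["message"])  # [SEPARATE] 🔄 分离循环 1/3 开始
```

The partial binds only `prefix`, so positional `message` and `emoji` arguments keep working unchanged at every call site.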
def generate_separate_protocol(
G: nx.DiGraph,
@@ -63,33 +93,45 @@ def generate_separate_protocol(
# 🔧 核心修改:从字典中提取容器ID
vessel_id, vessel_data = get_vessel(vessel)
debug_print("🌀" * 20)
debug_print("🚀 开始生成分离协议(支持vessel字典和体积运算)")
debug_print(f"📝 输入参数:")
debug_print(f" 🥽 vessel: {vessel} (ID: {vessel_id})")
debug_print(f" 🎯 分离目的: '{purpose}'")
debug_print(f" 📊 产物相: '{product_phase}'")
debug_print(f" 💧 溶剂: '{solvent}'")
debug_print(f" 📏 体积: {volume} (类型: {type(volume)})")
debug_print(f" 🔄 重复次数: {repeats}")
debug_print(f" 🎯 产物容器: '{product_vessel}'")
debug_print(f" 🗑️ 废液容器: '{waste_vessel}'")
debug_print(f" 📦 其他参数: {kwargs}")
debug_print("🌀" * 20)
action_sequence = []
# 🔧 新增:记录分离前的容器状态
debug_print("🔍 记录分离前容器状态...")
original_liquid_volume = get_vessel_liquid_volume(vessel)
debug_print(f"📊 分离前液体体积: {original_liquid_volume:.2f}mL")
# === 参数验证和标准化 ===
debug_print("🔍 步骤1: 参数验证和标准化...")
action_sequence.append(create_action_log(f"开始分离操作 - 容器: {vessel_id}", "🎬"))
action_sequence.append(create_action_log(f"分离目的: {purpose}", "🧪"))
action_sequence.append(create_action_log(f"产物相: {product_phase}", "📊"))
# 统一容器参数 - 支持字典和字符串
# vessel 已在上文通过 get_vessel 解析为 vessel_id
final_vessel_id = vessel_id
to_vessel_param = to_vessel or product_vessel
final_to_vessel_id = get_vessel(to_vessel_param)[0] if to_vessel_param else ""
waste_vessel_param = waste_phase_to_vessel or waste_vessel
final_waste_vessel_id = get_vessel(waste_vessel_param)[0] if waste_vessel_param else ""
# 统一体积参数
final_volume = parse_volume_input(volume or solvent_volume)
@@ -99,12 +141,16 @@ def generate_separate_protocol(
repeats = 1
debug_print(f"⚠️ 重复次数参数 <= 0自动设置为 1")
debug_print(f"🔧 标准化后的参数:")
debug_print(f" 🥼 分离容器: '{final_vessel_id}'")
debug_print(f" 🎯 产物容器: '{final_to_vessel_id}'")
debug_print(f" 🗑️ 废液容器: '{final_waste_vessel_id}'")
debug_print(f" 📏 溶剂体积: {final_volume}mL")
debug_print(f" 🔄 重复次数: {repeats}")
action_sequence.append(create_action_log(f"分离容器: {final_vessel_id}", "🧪"))
action_sequence.append(create_action_log(f"溶剂体积: {final_volume}mL", "📏"))
action_sequence.append(create_action_log(f"重复次数: {repeats}", "🔄"))
# 验证必需参数
if not purpose:
@@ -114,68 +160,72 @@ def generate_separate_protocol(
if purpose not in ["wash", "extract", "separate"]:
debug_print(f"⚠️ 未知的分离目的 '{purpose}',使用默认值 'separate'")
purpose = "separate"
action_sequence.append(create_action_log(f"未知目的,使用: {purpose}", "⚠️"))
if product_phase not in ["top", "bottom"]:
debug_print(f"⚠️ 未知的产物相 '{product_phase}',使用默认值 'top'")
product_phase = "top"
action_sequence.append(create_action_log(f"未知相别,使用: {product_phase}", "⚠️"))
debug_print("参数验证通过")
action_sequence.append(create_action_log("参数验证通过", ""))
# === 查找设备 ===
debug_print("🔍 步骤2: 查找设备...")
action_sequence.append(create_action_log("正在查找相关设备...", "🔍"))
# 查找分离器设备
separator_device = find_separator_device(G, final_vessel_id) # 🔧 使用 final_vessel_id
if separator_device:
action_sequence.append(create_action_log(f"找到分离器设备: {separator_device}", "🧪"))
else:
debug_print("⚠️ 未找到分离器设备,可能无法执行分离")
action_sequence.append(create_action_log("未找到分离器设备", "⚠️"))
# 查找搅拌器
stirrer_device = find_connected_stirrer(G, final_vessel_id) # 🔧 使用 final_vessel_id
if stirrer_device:
action_sequence.append(create_action_log(f"找到搅拌器: {stirrer_device}", "🌪️"))
else:
action_sequence.append(create_action_log("未找到搅拌器", "⚠️"))
# 查找溶剂容器(如果需要)
solvent_vessel = ""
if solvent and solvent.strip():
solvent_vessel = find_solvent_vessel(G, solvent)
if solvent_vessel:
action_sequence.append(create_action_log(f"找到溶剂容器: {solvent_vessel}", "💧"))
else:
action_sequence.append(create_action_log(f"未找到溶剂容器: {solvent}", "⚠️"))
debug_print(f"📊 设备配置:")
debug_print(f" 🧪 分离器设备: '{separator_device}'")
debug_print(f" 🌪️ 搅拌器设备: '{stirrer_device}'")
debug_print(f" 💧 溶剂容器: '{solvent_vessel}'")
# === 执行分离流程 ===
debug_print("🔍 步骤3: 执行分离流程...")
action_sequence.append(create_action_log("开始分离工作流程", "🎯"))
# 🔧 新增:体积变化跟踪变量
current_volume = original_liquid_volume
try:
for repeat_idx in range(repeats):
cycle_num = repeat_idx + 1
debug_print(f"🔄 第{cycle_num}轮: 开始分离循环 {cycle_num}/{repeats}")
action_sequence.append(create_action_log(f"分离循环 {cycle_num}/{repeats} 开始", "🔄"))
# 步骤3.1: 添加溶剂(如果需要)
if solvent_vessel and final_volume > 0:
debug_print(f"🔄 第{cycle_num}轮 步骤1: 添加溶剂 {solvent} ({final_volume}mL)")
action_sequence.append(create_action_log(f"向分离容器添加 {final_volume}mL {solvent}", "💧"))
try:
# 使用pump protocol添加溶剂
pump_actions = generate_pump_protocol_with_rinsing(
G=G,
from_vessel=solvent_vessel,
to_vessel=final_vessel_id, # 🔧 使用 final_vessel_id
volume=final_volume,
amount="",
time=0.0,
@@ -192,27 +242,30 @@ def generate_separate_protocol(
**kwargs
)
action_sequence.extend(pump_actions)
debug_print(f"✅ 溶剂添加完成,添加了 {len(pump_actions)} 个动作")
action_sequence.append(create_action_log(f"溶剂转移完成 ({len(pump_actions)} 个操作)", ""))
# 🔧 新增:更新体积 - 添加溶剂后
current_volume += final_volume
update_vessel_volume(vessel, G, current_volume, f"添加{final_volume}mL {solvent}")
except Exception as e:
debug_print(f"❌ 溶剂添加失败: {str(e)}")
action_sequence.append(create_action_log(f"溶剂添加失败: {str(e)}", ""))
else:
debug_print(f"🔄 第{cycle_num}轮 步骤1: 无需添加溶剂")
action_sequence.append(create_action_log("无需添加溶剂", "⏭️"))
# 步骤3.2: 启动搅拌(如果有搅拌器)
if stirrer_device and stir_time > 0:
debug_print(f"🔄 第{cycle_num}轮 步骤2: 开始搅拌 ({stir_speed}rpm,持续 {stir_time}s)")
action_sequence.append(create_action_log(f"开始搅拌: {stir_speed}rpm,持续 {stir_time}s", "🌪️"))
action_sequence.append({
"device_id": stirrer_device,
"action_name": "start_stir",
"action_kwargs": {
"vessel": {"id": final_vessel_id}, # 🔧 使用 final_vessel_id
"stir_speed": stir_speed,
"purpose": f"分离混合 - {purpose}"
}
@@ -220,37 +273,43 @@ def generate_separate_protocol(
# 搅拌等待
stir_minutes = stir_time / 60
action_sequence.append(create_action_log(f"搅拌中,持续 {stir_minutes:.1f} 分钟", "⏱️"))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": stir_time}
})
# 停止搅拌
action_sequence.append(create_action_log("停止搅拌器", "🛑"))
action_sequence.append({
"device_id": stirrer_device,
"action_name": "stop_stir",
"action_kwargs": {"vessel": final_vessel_id} # 🔧 使用 final_vessel_id
})
else:
debug_print(f"🔄 第{cycle_num}轮 步骤2: 无需搅拌")
action_sequence.append(create_action_log("无需搅拌", "⏭️"))
# 步骤3.3: 静置分层
if settling_time > 0:
debug_print(f"🔄 第{cycle_num}轮 步骤3: 静置分层 ({settling_time}s)")
settling_minutes = settling_time / 60
action_sequence.append(create_action_log(f"静置分层 ({settling_minutes:.1f} 分钟)", "⚖️"))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": settling_time}
})
else:
debug_print(f"🔄 第{cycle_num}轮 步骤3: 未指定静置时间")
action_sequence.append(create_action_log("未指定静置时间", "⏭️"))
# 步骤3.4: 执行分离操作
if separator_device:
debug_print(f"🔄 第{cycle_num}轮 步骤4: 执行分离操作")
action_sequence.append(create_action_log(f"执行分离: 收集{product_phase}相", "🧪"))
# 🔧 替换为具体的分离操作逻辑基于old版本
# 首先进行分液判断(电导突跃)
action_sequence.append({
@@ -265,10 +324,11 @@ def generate_separate_protocol(
phase_volume = current_volume / 2
# 智能查找分离容器底部
separation_vessel_bottom = find_separation_vessel_bottom(G, final_vessel_id) # ✅
if product_phase == "bottom":
debug_print(f"🔄 收集底相产物 → {final_to_vessel_id}")
action_sequence.append(create_action_log("收集底相产物", "📦"))
# 产物转移到目标瓶
if final_to_vessel_id:
@@ -304,7 +364,8 @@ def generate_separate_protocol(
action_sequence.extend(pump_actions)
elif product_phase == "top":
debug_print(f"🔄 收集上相产物 → {final_to_vessel_id}")
action_sequence.append(create_action_log("收集上相产物", "📦"))
# 弃去下面那一相进废液
if final_waste_vessel_id:
@@ -339,9 +400,10 @@ def generate_separate_protocol(
)
action_sequence.extend(pump_actions)
debug_print(f"分离操作完成")
action_sequence.append(create_action_log("分离操作完成", ""))
# 🔧 新增:分离后体积估算
separated_volume = phase_volume * 0.95 # 假设5%损失,只保留产物相体积
update_vessel_volume(vessel, G, separated_volume, f"分离操作后(第{cycle_num}轮)")
current_volume = separated_volume
@@ -349,21 +411,23 @@ def generate_separate_protocol(
# 收集结果
if final_to_vessel_id:
action_sequence.append(
create_action_log(f"产物 ({product_phase}相) 收集到: {final_to_vessel_id}", "📦"))
if final_waste_vessel_id:
action_sequence.append(create_action_log(f"废相收集到: {final_waste_vessel_id}", "🗑️"))
else:
debug_print(f"🔄 第{cycle_num}轮 步骤4: 无分离器设备,跳过分离")
action_sequence.append(create_action_log("无分离器设备可用", ""))
# 添加等待时间模拟分离
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 10.0}
})
# 🔧 新增:如果不是最后一次,从中转瓶转移回分液漏斗(基于old版本逻辑)
if repeat_idx < repeats - 1 and final_to_vessel_id and final_to_vessel_id != final_vessel_id:
debug_print(f"🔄 第{cycle_num}轮: 产物转回分离容器,准备下一轮")
action_sequence.append(create_action_log("产物转回分离容器,准备下一轮", "🔄"))
pump_actions = generate_pump_protocol_with_rinsing(
G=G,
@@ -380,85 +444,368 @@ def generate_separate_protocol(
# 循环间等待(除了最后一次)
if repeat_idx < repeats - 1:
debug_print(f"🔄 第{cycle_num}轮: 等待下一次循环...")
action_sequence.append(create_action_log("等待下一次循环...", ""))
action_sequence.append({
"action_name": "wait",
"action_kwargs": {"time": 5}
})
else:
action_sequence.append(create_action_log(f"分离循环 {cycle_num}/{repeats} 完成", "🌟"))
except Exception as e:
debug_print(f"❌ 分离工作流程执行失败: {str(e)}")
action_sequence.append(create_action_log(f"分离工作流程失败: {str(e)}", ""))
# 🔧 新增:分离完成后的最终状态报告
final_liquid_volume = get_vessel_liquid_volume(vessel)
# === 最终结果 ===
total_time = (stir_time + settling_time + 15) * repeats # 估算总时间
debug_print("🌀" * 20)
debug_print(f"🎉 分离协议生成完成")
debug_print(f"📊 协议统计:")
debug_print(f" 📋 总动作数: {len(action_sequence)}")
debug_print(f" ⏱️ 预计总时间: {total_time:.0f}s ({total_time / 60:.1f} 分钟)")
debug_print(f" 🥼 分离容器: {final_vessel_id}")
debug_print(f" 🎯 分离目的: {purpose}")
debug_print(f" 📊 产物相: {product_phase}")
debug_print(f" 🔄 重复次数: {repeats}")
debug_print(f"💧 体积变化统计:")
debug_print(f" - 分离前体积: {original_liquid_volume:.2f}mL")
debug_print(f" - 分离后体积: {final_liquid_volume:.2f}mL")
if solvent:
debug_print(f" 💧 溶剂: {solvent} ({final_volume}mL × {repeats}轮 = {final_volume * repeats:.2f}mL)")
if final_to_vessel_id:
debug_print(f" 🎯 产物容器: {final_to_vessel_id}")
if final_waste_vessel_id:
debug_print(f" 🗑️ 废液容器: {final_waste_vessel_id}")
debug_print("🌀" * 20)
# 添加完成日志
summary_msg = f"分离协议完成: {final_vessel_id} ({purpose},{repeats} 次循环)"
if solvent:
summary_msg += f",使用 {final_volume * repeats:.2f}mL {solvent}"
action_sequence.append(create_action_log(summary_msg, "🎉"))
return action_sequence
def parse_volume_input(volume_input: Union[str, float]) -> float:
"""
解析体积输入,支持带单位的字符串
Args:
volume_input: 体积输入(如 "200 mL", "?", 50.0)
Returns:
float: 体积(毫升)
"""
if isinstance(volume_input, (int, float)):
debug_print(f"📏 体积输入为数值: {volume_input}")
return float(volume_input)
if not volume_input or not str(volume_input).strip():
debug_print(f"⚠️ 体积输入为空,返回 0.0mL")
return 0.0
volume_str = str(volume_input).lower().strip()
debug_print(f"🔍 解析体积输入: '{volume_str}'")
# 处理未知体积
if volume_str in ['?', 'unknown', 'tbd', 'to be determined', '未知', '待定']:
default_volume = 100.0 # 默认100mL
debug_print(f"❓ 检测到未知体积,使用默认值: {default_volume}mL")
return default_volume
# 移除空格并提取数字和单位
volume_clean = re.sub(r'\s+', '', volume_str)
# 匹配数字和单位的正则表达式
match = re.match(r'([0-9]*\.?[0-9]+)\s*(ml|l|μl|ul|microliter|milliliter|liter|毫升|升|微升)?', volume_clean)
if not match:
debug_print(f"⚠️ 无法解析体积: '{volume_str}',使用默认值 100mL")
return 100.0
value = float(match.group(1))
unit = match.group(2) or 'ml' # 默认单位为毫升
# 转换为毫升
if unit in ['l', 'liter', '升']:
volume = value * 1000.0 # L -> mL
debug_print(f"🔄 体积转换: {value}L -> {volume}mL")
elif unit in ['μl', 'ul', 'microliter', '微升']:
volume = value / 1000.0 # μL -> mL
debug_print(f"🔄 体积转换: {value}μL -> {volume}mL")
else: # ml, milliliter, 毫升 或默认
volume = value # 已经是mL
debug_print(f"✅ 体积已为毫升单位: {volume}mL")
return volume
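The unit handling above can be exercised with a condensed stand-alone mirror of `parse_volume_input` (only the `ml`/`l`/`ul` branches are shown; the 100 mL default for `'?'` matches the function above):

```python
import re

def parse_volume_ml(volume_input, default=100.0):
    # Numbers pass through unchanged.
    if isinstance(volume_input, (int, float)):
        return float(volume_input)
    text = str(volume_input or "").lower().strip()
    if not text:
        return 0.0
    # '?' and similar placeholders fall back to the default volume.
    if text in ("?", "unknown", "tbd"):
        return default
    m = re.match(r"([0-9]*\.?[0-9]+)(ml|l|ul)?", re.sub(r"\s+", "", text))
    if not m:
        return default
    value, unit = float(m.group(1)), m.group(2) or "ml"
    # Normalize everything to millilitres.
    return value * {"ml": 1.0, "l": 1000.0, "ul": 0.001}[unit]

print(parse_volume_ml("200 mL"), parse_volume_ml("?"), parse_volume_ml("0.5l"))
```

As in the full function, the regex tries `ml` before `l`, so `"200ml"` is not mis-read as 200 litres.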
def find_solvent_vessel(G: nx.DiGraph, solvent: str) -> str:
"""查找溶剂容器,支持多种匹配模式"""
if not solvent or not solvent.strip():
debug_print("⏭️ 未指定溶剂,跳过溶剂容器查找")
return ""
debug_print(f"🔍 正在查找溶剂 '{solvent}' 的容器...")
# 🔧 方法1:直接搜索 data.reagent_name 和 config.reagent
debug_print(f"📋 方法1: 搜索试剂字段...")
for node in G.nodes():
node_data = G.nodes[node].get('data', {})
node_type = G.nodes[node].get('type', '')
config_data = G.nodes[node].get('config', {})
# 只搜索容器类型的节点
if node_type == 'container':
reagent_name = node_data.get('reagent_name', '').lower()
config_reagent = config_data.get('reagent', '').lower()
# 精确匹配
if reagent_name == solvent.lower() or config_reagent == solvent.lower():
debug_print(f"✅ 通过试剂字段精确匹配找到容器: {node}")
return node
# 模糊匹配
if (solvent.lower() in reagent_name and reagent_name) or \
(solvent.lower() in config_reagent and config_reagent):
debug_print(f"✅ 通过试剂字段模糊匹配找到容器: {node}")
return node
# 🔧 方法2:常见的容器命名规则
debug_print(f"📋 方法2: 使用命名规则...")
solvent_clean = solvent.lower().replace(' ', '_').replace('-', '_')
possible_names = [
f"flask_{solvent_clean}",
f"bottle_{solvent_clean}",
f"vessel_{solvent_clean}",
f"{solvent_clean}_flask",
f"{solvent_clean}_bottle",
f"solvent_{solvent_clean}",
f"reagent_{solvent_clean}",
f"reagent_bottle_{solvent_clean}",
f"reagent_bottle_1", # 通用试剂瓶
f"reagent_bottle_2",
f"reagent_bottle_3"
]
debug_print(f"🎯 尝试的容器名称: {possible_names[:5]}... (共 {len(possible_names)} 个)")
for name in possible_names:
if name in G.nodes():
node_type = G.nodes[name].get('type', '')
if node_type == 'container':
debug_print(f"✅ 通过命名规则找到容器: {name}")
return name
# 🔧 方法3:使用第一个试剂瓶作为备选
debug_print(f"📋 方法3: 查找备用试剂瓶...")
for node_id in G.nodes():
node_data = G.nodes[node_id]
if (node_data.get('type') == 'container' and
('reagent' in node_id.lower() or 'bottle' in node_id.lower())):
debug_print(f"⚠️ 未找到专用容器,使用备用容器: {node_id}")
return node_id
debug_print(f"❌ 无法找到溶剂 '{solvent}' 的容器")
return ""
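The three-tier lookup above (reagent-field match, then naming conventions, then any reagent bottle as fallback) can be sketched against a plain dict standing in for the `networkx` graph; the node names here are hypothetical:

```python
nodes = {
    "flask_1": {"type": "container", "data": {"reagent_name": "water"}},
    "flask_ethanol": {"type": "container", "data": {}},
    "reagent_bottle_1": {"type": "container", "data": {}},
    "pump_1": {"type": "device", "data": {}},
}

def find_solvent_container(nodes, solvent):
    s = solvent.lower()
    # Tier 1: exact reagent-name match on container nodes.
    for name, node in nodes.items():
        if node["type"] == "container" and node["data"].get("reagent_name", "").lower() == s:
            return name
    # Tier 2: conventional names such as flask_<solvent>.
    for pattern in (f"flask_{s}", f"bottle_{s}", f"solvent_{s}"):
        if pattern in nodes and nodes[pattern]["type"] == "container":
            return pattern
    # Tier 3: fall back to any reagent bottle.
    for name, node in nodes.items():
        if node["type"] == "container" and "reagent" in name:
            return name
    return ""

print(find_solvent_container(nodes, "water"))    # flask_1 (tier 1)
print(find_solvent_container(nodes, "ethanol"))  # flask_ethanol (tier 2)
print(find_solvent_container(nodes, "acetone"))  # reagent_bottle_1 (tier 3)
```

Ordering the tiers from most to least specific means a generic reagent bottle is only used when nothing better matches, which mirrors the warnings logged by the real function.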
def find_separator_device(G: nx.DiGraph, vessel: str) -> str:
"""查找分离器设备,支持多种查找方式"""
debug_print(f"🔍 正在查找容器 '{vessel}' 的分离器设备...")
# 方法1:查找连接到容器的分离器设备
debug_print(f"📋 方法1: 检查连接的分离器...")
separator_nodes = []
for node in G.nodes():
node_class = G.nodes[node].get('class', '').lower()
if 'separator' in node_class:
separator_nodes.append(node)
debug_print(f"📋 发现分离器设备: {node}")
# 检查是否连接到目标容器
if G.has_edge(node, vessel) or G.has_edge(vessel, node):
debug_print(f"✅ 找到连接的分离器: {node}")
return node
debug_print(f"📊 找到的分离器总数: {len(separator_nodes)}")
# 方法2:根据命名规则查找
debug_print(f"📋 方法2: 使用命名规则...")
possible_names = [
f"{vessel}_controller",
f"{vessel}_separator",
vessel, # 容器本身可能就是分离器
"separator_1",
"virtual_separator",
"liquid_handler_1", # 液体处理器也可能用于分离
"controller_1"
]
debug_print(f"🎯 尝试的分离器名称: {possible_names}")
for name in possible_names:
if name in G.nodes():
node_class = G.nodes[name].get('class', '').lower()
if 'separator' in node_class or 'controller' in node_class:
debug_print(f"✅ 通过命名规则找到分离器: {name}")
return name
# 方法3:查找第一个分离器设备
debug_print(f"📋 方法3: 使用第一个可用分离器...")
if separator_nodes:
debug_print(f"⚠️ 使用第一个分离器设备: {separator_nodes[0]}")
return separator_nodes[0]
debug_print(f"❌ 未找到分离器设备")
return ""
def find_connected_stirrer(G: nx.DiGraph, vessel: str) -> str:
"""查找连接到指定容器的搅拌器"""
debug_print(f"🔍 正在查找与容器 {vessel} 连接的搅拌器...")
stirrer_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'stirrer' in node_class.lower():
stirrer_nodes.append(node)
debug_print(f"📋 发现搅拌器: {node}")
debug_print(f"📊 找到的搅拌器总数: {len(stirrer_nodes)}")
# 检查哪个搅拌器与目标容器相连
for stirrer in stirrer_nodes:
if G.has_edge(stirrer, vessel) or G.has_edge(vessel, stirrer):
debug_print(f"✅ 找到连接的搅拌器: {stirrer}")
return stirrer
# 如果没有连接的搅拌器,返回第一个可用的
if stirrer_nodes:
debug_print(f"⚠️ 未找到直接连接的搅拌器,使用第一个可用的: {stirrer_nodes[0]}")
return stirrer_nodes[0]
debug_print("❌ 未找到搅拌器")
return ""
def get_vessel_liquid_volume(vessel: dict) -> float:
"""
获取容器中的液体体积 - 支持vessel字典
Args:
vessel: 容器字典
Returns:
float: 液体体积mL
"""
if not vessel or "data" not in vessel:
debug_print(f"⚠️ 容器数据为空,返回 0.0mL")
return 0.0
vessel_data = vessel["data"]
vessel_id = vessel.get("id", "unknown")
debug_print(f"🔍 读取容器 '{vessel_id}' 体积数据: {vessel_data}")
# 检查liquid_volume字段
if "liquid_volume" in vessel_data:
liquid_volume = vessel_data["liquid_volume"]
# 处理列表格式
if isinstance(liquid_volume, list):
if len(liquid_volume) > 0:
volume = liquid_volume[0]
if isinstance(volume, (int, float)):
debug_print(f"✅ 容器 '{vessel_id}' 体积: {volume}mL (列表格式)")
return float(volume)
# 处理直接数值格式
elif isinstance(liquid_volume, (int, float)):
debug_print(f"✅ 容器 '{vessel_id}' 体积: {liquid_volume}mL (数值格式)")
return float(liquid_volume)
# 检查其他可能的体积字段
volume_keys = ['current_volume', 'total_volume', 'volume']
for key in volume_keys:
if key in vessel_data:
try:
volume = float(vessel_data[key])
if volume > 0:
debug_print(f"✅ 容器 '{vessel_id}' 体积: {volume}mL (字段: {key})")
return volume
except (ValueError, TypeError):
continue
debug_print(f"⚠️ 无法获取容器 '{vessel_id}' 的体积,返回默认值 50.0mL")
return 50.0
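Because `liquid_volume` may be stored either as a scalar or as a one-element list, readers have to normalize both shapes. A compact stand-alone mirror of `get_vessel_liquid_volume` for illustration (the 50 mL fallback matches the function above):

```python
def read_liquid_volume(vessel, default=50.0):
    # liquid_volume may be a scalar or a one-element list; normalize
    # both shapes to a float, falling back to the default otherwise.
    data = (vessel or {}).get("data", {})
    volume = data.get("liquid_volume")
    if isinstance(volume, list):
        volume = volume[0] if volume else None
    if isinstance(volume, (int, float)):
        return float(volume)
    return default

print(read_liquid_volume({"id": "flask_1", "data": {"liquid_volume": [25.0]}}))  # 25.0
```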
def update_vessel_volume(vessel: dict, G: nx.DiGraph, new_volume: float, description: str = "") -> None:
"""
Update the vessel volume - updates both the vessel dict and the graph node
Args:
vessel: vessel dict
G: network graph
new_volume: new volume
description: description of the update
"""
vessel_id = vessel.get("id", "unknown")
if description:
debug_print(f"🔧 Updating vessel volume - {description}")
# Update the volume in the vessel dict
if "data" in vessel:
if "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] = new_volume
else:
vessel["data"]["liquid_volume"] = [new_volume]
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"] = {"liquid_volume": new_volume}
# Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
else:
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
debug_print(f"📊 Vessel '{vessel_id}' volume updated to: {new_volume:.2f}mL")
def find_separation_vessel_bottom(G: nx.DiGraph, vessel_id: str) -> str:
"""
Heuristically find the bottom vessel of a separation vessel (assumed to be a flask or vessel type)
Args:
G: network graph
vessel_id: separation vessel ID
Returns:
str: bottom vessel ID
"""
debug_print(f"🔍 Looking for the bottom vessel of separation vessel {vessel_id}...")
# Method 1: guess by naming convention
possible_bottoms = [
f"{vessel_id}_bottom",
@@ -467,25 +814,32 @@ def find_separation_vessel_bottom(G: nx.DiGraph, vessel_id: str) -> str:
f"{vessel_id}_flask",
f"{vessel_id}_vessel"
]
debug_print(f"📋 Candidate bottom-vessel names: {possible_bottoms}")
for bottom_id in possible_bottoms:
if bottom_id in G.nodes():
node_type = G.nodes[bottom_id].get('type', '')
if node_type == 'container':
debug_print(f"✅ Found bottom vessel by naming convention: {bottom_id}")
return bottom_id
# Method 2: find containers connected to the separator
# Method 2: find containers connected to the separator (assuming the bottom vessel is attached to it)
debug_print(f"📋 Method 2: looking for connected containers...")
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'separator' in node_class.lower():
# Check the separator's inlet side
if G.has_edge(node, vessel_id):
for neighbor in G.neighbors(node):
if neighbor != vessel_id:
neighbor_type = G.nodes[neighbor].get('type', '')
if neighbor_type == 'container':
debug_print(f"✅ Found bottom vessel via connection: {neighbor}")
return neighbor
debug_print(f"❌ Could not find the bottom vessel of separation vessel {vessel_id}")
return ""

View File

@@ -1,40 +1,116 @@
from typing import List, Dict, Any, Union
import networkx as nx
import logging
import re
from .utils.unit_parser import parse_time_input
from .utils.resource_helper import get_resource_id, get_resource_display_info
from .utils.logger_util import debug_print
from .utils.vessel_parser import find_connected_stirrer
logger = logging.getLogger(__name__)
def debug_print(message):
"""Debug output"""
logger.info(f"[STIR] {message}")
def find_connected_stirrer(G: nx.DiGraph, vessel: str = None) -> str:
"""Find the stirrer connected to the given vessel"""
debug_print(f"🔍 Looking for a stirrer, target vessel: {vessel} 🥽")
# 🔧 Find all stirrer devices
stirrer_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
if 'stirrer' in node_class.lower() or 'virtual_stirrer' in node_class:
stirrer_nodes.append(node)
debug_print(f"🎉 Found stirrer: {node} 🌪️")
# 🔗 Check connections
if vessel and stirrer_nodes:
for stirrer in stirrer_nodes:
if G.has_edge(stirrer, vessel) or G.has_edge(vessel, stirrer):
debug_print(f"✅ Stirrer '{stirrer}' is connected to vessel '{vessel}' 🔗")
return stirrer
# 🎯 Fall back to the first available device
if stirrer_nodes:
selected = stirrer_nodes[0]
debug_print(f"🔧 Using the first stirrer: {selected} 🌪️")
return selected
# 🆘 Default device
debug_print("⚠️ No stirrer found; using the default device 🌪️")
return "stirrer_1"
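The lookup pattern above (collect candidates by class, prefer a directly connected one, otherwise fall back to the first candidate or a default) recurs for stirrers and heat/chill devices. A dependency-free sketch of the same pattern over a plain adjacency mapping (the `find_device` helper is illustrative; the real code walks a `networkx` graph):

```python
def find_device(nodes, edges, keyword, vessel=None, default=""):
    """nodes: {node_id: class_string}; edges: set of (src, dst) tuples."""
    # Collect all devices whose class matches the keyword
    candidates = [n for n, cls in nodes.items() if keyword in (cls or "").lower()]
    # Prefer a device directly connected to the vessel (either direction)
    if vessel:
        for dev in candidates:
            if (dev, vessel) in edges or (vessel, dev) in edges:
                return dev
    # Otherwise fall back to the first candidate, then to the default
    return candidates[0] if candidates else default

nodes = {"stirrer_1": "virtual_stirrer", "stirrer_2": "virtual_stirrer", "flask_1": "container"}
edges = {("stirrer_2", "flask_1")}
print(find_device(nodes, edges, "stirrer", vessel="flask_1"))  # stirrer_2
print(find_device(nodes, edges, "stirrer"))                    # stirrer_1
```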
def validate_and_fix_params(stir_time: float, stir_speed: float, settling_time: float) -> tuple:
"""Validate and correct the parameters"""
# ⏰ Stir-time validation
if stir_time < 0:
debug_print(f"Stir time {stir_time}s is invalid; corrected to 100s")
debug_print(f"⚠️ Stir time {stir_time}s is invalid; corrected to 100s 🕐")
stir_time = 100.0
elif stir_time > 100: # capped at 100s
debug_print(f"Stir time {stir_time}s is too long; capped at 100s for simulation runs")
debug_print(f"⚠️ Stir time {stir_time}s is too long; capped at 100s for simulation runs 🕐")
stir_time = 100.0
else:
debug_print(f"✅ Stir time {stir_time}s ({stir_time/60:.1f} min) is valid ⏰")
# 🌪️ Stir-speed validation
if stir_speed < 10.0 or stir_speed > 1500.0:
debug_print(f"Stir speed {stir_speed} RPM is out of range; corrected to 300 RPM")
debug_print(f"⚠️ Stir speed {stir_speed} RPM is out of range; corrected to 300 RPM 🌪️")
stir_speed = 300.0
else:
debug_print(f"✅ Stir speed {stir_speed} RPM is within the normal range 🌪️")
# ⏱️ Settling-time validation
if settling_time < 0 or settling_time > 600: # capped at 10 minutes
debug_print(f"Settling time {settling_time}s is out of range; corrected to 60s")
debug_print(f"⚠️ Settling time {settling_time}s is out of range; corrected to 60s ⏱️")
settling_time = 60.0
else:
debug_print(f"✅ Settling time {settling_time}s is within the normal range ⏱️")
return stir_time, stir_speed, settling_time
def extract_vessel_id(vessel) -> str:
"""Extract the vessel_id from the vessel parameter; accepts str / dict / ResourceDictInstance"""
return get_resource_id(vessel)
def extract_vessel_id(vessel: Union[str, dict]) -> str:
"""
Extract the vessel_id from the vessel parameter
Args:
vessel: vessel dict or vessel_id string
Returns:
str: vessel_id
"""
if isinstance(vessel, dict):
vessel_id = list(vessel.values())[0].get("id", "")
debug_print(f"🔧 Extracted ID from vessel dict: {vessel_id}")
return vessel_id
elif isinstance(vessel, str):
debug_print(f"🔧 vessel parameter is a string: {vessel}")
return vessel
else:
debug_print(f"⚠️ Invalid vessel parameter type: {type(vessel)}")
return ""
def get_vessel_display_info(vessel) -> str:
"""Get the vessel's display info (for logging); accepts str / dict / ResourceDictInstance"""
return get_resource_display_info(vessel)
def get_vessel_display_info(vessel: Union[str, dict]) -> str:
"""
Get the vessel's display info (for logging)
Args:
vessel: vessel dict or vessel_id string
Returns:
str: display info
"""
if isinstance(vessel, dict):
vessel_id = vessel.get("id", "unknown")
vessel_name = vessel.get("name", "")
if vessel_name:
return f"{vessel_id} ({vessel_name})"
else:
return vessel_id
else:
return str(vessel)
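The extractor has to cope with two dict shapes: a flat dict carrying an `id` key, and a wrapper dict keyed by station ID. A small sketch covering both plus the plain-string case (the `resource_id` name is illustrative):

```python
def resource_id(resource) -> str:
    """Return the resource ID for a str, a flat dict, or a {station_id: {...}} wrapper."""
    if isinstance(resource, str):
        return resource
    if isinstance(resource, dict):
        if "id" in resource:
            return resource["id"]
        # Wrapper format: take the first value and read its "id"
        first = next(iter(resource.values()), {})
        return first.get("id", "") if isinstance(first, dict) else ""
    return ""

print(resource_id("flask_2"))                             # flask_2
print(resource_id({"id": "flask_1", "name": "reactor"}))  # flask_1
print(resource_id({"station_A": {"id": "flask_3"}}))      # flask_3
```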
def generate_stir_protocol(
G: nx.DiGraph,
@@ -49,13 +125,16 @@ def generate_stir_protocol(
) -> List[Dict[str, Any]]:
"""Generate the action sequence for a stir operation - fixes vessel parameter passing"""
# 🔧 Core change: handle the vessel parameter correctly
vessel_id = extract_vessel_id(vessel)
vessel_display = get_vessel_display_info(vessel)
# Make sure vessel_resource is a complete Resource object
# 🔧 Key fix: make sure vessel_resource is a complete Resource object
if isinstance(vessel, dict):
vessel_resource = vessel
vessel_resource = vessel # already a complete Resource dict
debug_print(f"✅ Using the vessel Resource object passed in")
else:
# If it is only a string, build a basic Resource object
vessel_resource = {
"id": vessel,
"name": "",
@@ -71,60 +150,91 @@ def generate_stir_protocol(
"sample_id": "",
"type": ""
}
# Parameter validation
if not vessel_id:
debug_print(f"🔧 Built a basic vessel Resource object: {vessel}")
debug_print("🌪️" * 20)
debug_print("🚀 Generating stir protocol (supports vessel dicts)")
debug_print(f"📝 Input parameters:")
debug_print(f" 🥽 vessel: {vessel_display} (ID: {vessel_id})")
debug_print(f" ⏰ time: {time}")
debug_print(f" 🕐 stir_time: {stir_time}")
debug_print(f" 🎯 time_spec: {time_spec}")
debug_print(f" 🌪️ stir_speed: {stir_speed} RPM")
debug_print(f" ⏱️ settling_time: {settling_time}")
debug_print("🌪️" * 20)
# 📋 Parameter validation
debug_print("📍 Step 1: parameter validation... 🔧")
if not vessel_id: # 🔧 use vessel_id
debug_print("❌ The vessel parameter must not be empty! 😱")
raise ValueError("The vessel parameter must not be empty")
if vessel_id not in G.nodes():
if vessel_id not in G.nodes(): # 🔧 use vessel_id
debug_print(f"❌ Vessel '{vessel_id}' does not exist in the system! 😞")
raise ValueError(f"Vessel '{vessel_id}' does not exist in the system")
# Parameter parsing — determine the effective time (priority: time_spec > stir_time > time)
debug_print("✅ Basic parameter validation passed 🎯")
# 🔄 Parameter parsing
debug_print("📍 Step 2: parameter parsing... ⚡")
# Determine the effective time (priority: time_spec > stir_time > time)
if time_spec:
parsed_time = parse_time_input(time_spec)
debug_print(f"🎯 Using time_spec: '{time_spec}' → {parsed_time}s")
elif stir_time not in ["0", 0, 0.0]:
parsed_time = parse_time_input(stir_time)
debug_print(f"🎯 Using stir_time: {stir_time} → {parsed_time}s")
else:
parsed_time = parse_time_input(time)
debug_print(f"🎯 Using time: {time} → {parsed_time}s")
# Parse the settling time
parsed_settling_time = parse_time_input(settling_time)
# Simulation run-time optimization
# 🕐 Simulation run-time optimization
debug_print(" ⏱️ Checking simulation run-time limits...")
original_stir_time = parsed_time
original_settling_time = parsed_settling_time
# Stir time capped at 60 seconds
stir_time_limit = 60.0
if parsed_time > stir_time_limit:
parsed_time = stir_time_limit
debug_print(f" 🎮 Stir-time optimization: {original_stir_time}s → {parsed_time}s ⚡")
# Settling time capped at 30 seconds
settling_time_limit = 30.0
if parsed_settling_time > settling_time_limit:
parsed_settling_time = settling_time_limit
debug_print(f" 🎮 Settling-time optimization: {original_settling_time}s → {parsed_settling_time}s ⚡")
# Parameter correction
parsed_time, stir_speed, parsed_settling_time = validate_and_fix_params(
parsed_time, stir_speed, parsed_settling_time
)
debug_print(f"Final parameters: time={parsed_time}s, speed={stir_speed}RPM, settling={parsed_settling_time}s")
# Find devices
debug_print(f"🎯 Final parameters: time={parsed_time}s, speed={stir_speed}RPM, settling={parsed_settling_time}s")
# 🔍 Find devices
debug_print("📍 Step 3: looking for a stirrer... 🔍")
try:
stirrer_id = find_connected_stirrer(G, vessel_id)
stirrer_id = find_connected_stirrer(G, vessel_id) # 🔧 use vessel_id
debug_print(f"🎉 Using stirrer: {stirrer_id}")
except Exception as e:
debug_print(f"❌ Device lookup failed: {str(e)} 😭")
raise ValueError(f"Could not find a stirrer: {str(e)}")
# Generate actions
# 🚀 Generate actions
debug_print("📍 Step 4: generating stir actions... 🌪️")
action_sequence = []
stir_action = {
"device_id": stirrer_id,
"action_name": "stir",
"action_kwargs": {
"vessel": {"id": vessel_id},
# 🔧 Key fix: pass the vessel_id string rather than the full Resource object
"vessel": {"id": vessel_id}, # pass the string ID, not the Resource object
"time": str(time),
"event": event,
"time_spec": time_spec,
@@ -134,14 +244,22 @@ def generate_stir_protocol(
}
}
action_sequence.append(stir_action)
# Time-optimization info
debug_print("✅ Stir action added 🌪️✨")
# Show time-optimization info
if original_stir_time != parsed_time or original_settling_time != parsed_settling_time:
debug_print(f"Simulation optimization: stir {original_stir_time/60:.1f}min→{parsed_time/60:.1f}min, "
f"settling {original_settling_time/60:.1f}min→{parsed_settling_time/60:.1f}min")
debug_print(f"Stir protocol generated: {vessel_display}, {stir_speed}RPM, "
f"{parsed_time}s, settling {parsed_settling_time}s, total {(parsed_time + parsed_settling_time)/60:.1f}min")
debug_print(f" 🎭 Simulation-optimization notes:")
debug_print(f" Stir time: {original_stir_time/60:.1f} min → {parsed_time/60:.1f} min")
debug_print(f" Settling time: {original_settling_time/60:.1f} min → {parsed_settling_time/60:.1f} min")
# 🎊 Summary
debug_print("🎊" * 20)
debug_print(f"🎉 Stir protocol generated! ✨")
debug_print(f"📊 Total actions: {len(action_sequence)}")
debug_print(f"🥽 Stir vessel: {vessel_display}")
debug_print(f"🌪️ Stir parameters: {stir_speed} RPM, {parsed_time}s, settling {parsed_settling_time}s")
debug_print(f"⏱️ Estimated total time: {(parsed_time + parsed_settling_time)/60:.1f} min ⌛")
debug_print("🎊" * 20)
return action_sequence
@@ -154,13 +272,16 @@ def generate_start_stir_protocol(
) -> List[Dict[str, Any]]:
"""Generate the action sequence for a start-stir operation - fixes vessel parameter passing"""
# 🔧 Core change: handle the vessel parameter correctly
vessel_id = extract_vessel_id(vessel)
vessel_display = get_vessel_display_info(vessel)
# Make sure vessel_resource is a complete Resource object
# 🔧 Key fix: make sure vessel_resource is a complete Resource object
if isinstance(vessel, dict):
vessel_resource = vessel
vessel_resource = vessel # already a complete Resource dict
debug_print(f"✅ Using the vessel Resource object passed in")
else:
# If it is only a string, build a basic Resource object
vessel_resource = {
"id": vessel,
"name": "",
@@ -176,29 +297,39 @@ def generate_start_stir_protocol(
"sample_id": "",
"type": ""
}
debug_print(f"🔧 Built a basic vessel Resource object: {vessel}")
debug_print("🔄 Generating start-stir protocol (fixes vessel parameter)")
debug_print(f"🥽 vessel: {vessel_display} (ID: {vessel_id})")
debug_print(f"🌪️ speed: {stir_speed} RPM")
debug_print(f"🎯 purpose: {purpose}")
# Basic validation
if not vessel_id or vessel_id not in G.nodes():
debug_print("❌ Vessel validation failed!")
raise ValueError("Invalid vessel parameter")
# Parameter correction
if stir_speed < 10.0 or stir_speed > 1500.0:
debug_print(f"⚠️ Stir speed corrected: {stir_speed} → 300 RPM 🌪️")
stir_speed = 300.0
# Find the device
stirrer_id = find_connected_stirrer(G, vessel_id)
# 🔧 Key fix: pass the vessel_id string
action_sequence = [{
"device_id": stirrer_id,
"action_name": "start_stir",
"action_kwargs": {
"vessel": {"id": vessel_id},
# 🔧 Key fix: pass the vessel_id string rather than the full Resource object
"vessel": {"id": vessel_id}, # pass the string ID, not the Resource object
"stir_speed": stir_speed,
"purpose": purpose or f"Start stirring at {stir_speed} RPM"
}
}]
debug_print(f"Start-stir protocol: {vessel_display}, {stir_speed}RPM, device={stirrer_id}")
debug_print(f"Start-stir protocol generated 🎯")
return action_sequence
def generate_stop_stir_protocol(
@@ -208,13 +339,16 @@ def generate_stop_stir_protocol(
) -> List[Dict[str, Any]]:
"""Generate the action sequence for a stop-stir operation - fixes vessel parameter passing"""
# 🔧 Core change: handle the vessel parameter correctly
vessel_id = extract_vessel_id(vessel)
vessel_display = get_vessel_display_info(vessel)
# Make sure vessel_resource is a complete Resource object
# 🔧 Key fix: make sure vessel_resource is a complete Resource object
if isinstance(vessel, dict):
vessel_resource = vessel
vessel_resource = vessel # already a complete Resource dict
debug_print(f"✅ Using the vessel Resource object passed in")
else:
# If it is only a string, build a basic Resource object
vessel_resource = {
"id": vessel,
"name": "",
@@ -230,103 +364,115 @@ def generate_stop_stir_protocol(
"sample_id": "",
"type": ""
}
debug_print(f"🔧 Built a basic vessel Resource object: {vessel}")
debug_print("🛑 Generating stop-stir protocol (fixes vessel parameter)")
debug_print(f"🥽 vessel: {vessel_display} (ID: {vessel_id})")
# Basic validation
if not vessel_id or vessel_id not in G.nodes():
debug_print("❌ Vessel validation failed!")
raise ValueError("Invalid vessel parameter")
# Find the device
stirrer_id = find_connected_stirrer(G, vessel_id)
# 🔧 Key fix: pass the vessel_id string
action_sequence = [{
"device_id": stirrer_id,
"action_name": "stop_stir",
"action_kwargs": {
"vessel": {"id": vessel_id},
# 🔧 Key fix: pass the vessel_id string rather than the full Resource object
"vessel": {"id": vessel_id}, # pass the string ID, not the Resource object
}
}]
debug_print(f"Stop-stir protocol: {vessel_display}, device={stirrer_id}")
debug_print(f"Stop-stir protocol generated 🎯")
return action_sequence
# Convenience functions
# 🔧 New: convenience functions
def stir_briefly(G: nx.DiGraph, vessel: Union[str, dict],
speed: float = 300.0) -> List[Dict[str, Any]]:
"""Brief stir (30 seconds)"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Brief stir: {vessel_display} @ {speed}RPM (30s)")
debug_print(f"Brief stir: {vessel_display} @ {speed}RPM (30s)")
return generate_stir_protocol(G, vessel, time="30", stir_speed=speed)
def stir_slowly(G: nx.DiGraph, vessel: Union[str, dict],
time: Union[str, float] = "10 min") -> List[Dict[str, Any]]:
"""Slow stir"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Slow stir: {vessel_display} @ 150RPM")
debug_print(f"🐌 Slow stir: {vessel_display} @ 150RPM")
return generate_stir_protocol(G, vessel, time=time, stir_speed=150.0)
def stir_vigorously(G: nx.DiGraph, vessel: Union[str, dict],
time: Union[str, float] = "5 min") -> List[Dict[str, Any]]:
"""Vigorous stir"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Vigorous stir: {vessel_display} @ 800RPM")
debug_print(f"💨 Vigorous stir: {vessel_display} @ 800RPM")
return generate_stir_protocol(G, vessel, time=time, stir_speed=800.0)
def stir_for_reaction(G: nx.DiGraph, vessel: Union[str, dict],
time: Union[str, float] = "1 h") -> List[Dict[str, Any]]:
"""Reaction stirring (standard speed, long duration)"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Reaction stir: {vessel_display} @ 400RPM")
debug_print(f"🧪 Reaction stir: {vessel_display} @ 400RPM")
return generate_stir_protocol(G, vessel, time=time, stir_speed=400.0)
def stir_for_dissolution(G: nx.DiGraph, vessel: Union[str, dict],
time: Union[str, float] = "15 min") -> List[Dict[str, Any]]:
"""Dissolution stirring (medium speed)"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Dissolution stir: {vessel_display} @ 500RPM")
debug_print(f"💧 Dissolution stir: {vessel_display} @ 500RPM")
return generate_stir_protocol(G, vessel, time=time, stir_speed=500.0)
def stir_gently(G: nx.DiGraph, vessel: Union[str, dict],
time: Union[str, float] = "30 min") -> List[Dict[str, Any]]:
"""Gentle stir"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Gentle stir: {vessel_display} @ 200RPM")
debug_print(f"🍃 Gentle stir: {vessel_display} @ 200RPM")
return generate_stir_protocol(G, vessel, time=time, stir_speed=200.0)
def stir_overnight(G: nx.DiGraph, vessel: Union[str, dict]) -> List[Dict[str, Any]]:
"""Overnight stirring (shortened to 2 hours in simulation)"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Overnight stir (2 h in simulation): {vessel_display} @ 300RPM")
debug_print(f"🌙 Overnight stir (2 h in simulation): {vessel_display} @ 300RPM")
return generate_stir_protocol(G, vessel, time="2 h", stir_speed=300.0)
def start_continuous_stirring(G: nx.DiGraph, vessel: Union[str, dict],
speed: float = 300.0, purpose: str = "continuous stirring") -> List[Dict[str, Any]]:
"""Start continuous stirring"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Starting continuous stirring: {vessel_display} @ {speed}RPM")
debug_print(f"🔄 Starting continuous stirring: {vessel_display} @ {speed}RPM")
return generate_start_stir_protocol(G, vessel, stir_speed=speed, purpose=purpose)
def stop_all_stirring(G: nx.DiGraph, vessel: Union[str, dict]) -> List[Dict[str, Any]]:
"""Stop all stirring"""
vessel_display = get_vessel_display_info(vessel)
debug_print(f"Stopping stirring: {vessel_display}")
debug_print(f"🛑 Stopping stirring: {vessel_display}")
return generate_stop_stir_protocol(G, vessel)
# Test function
def test_stir_protocol():
"""Test the stir protocol"""
debug_print("🧪 === STIR PROTOCOL test === ✨")
# Test vessel parameter handling
debug_print("🔧 Testing vessel parameter handling...")
# Dict format
vessel_dict = {"id": "flask_1", "name": "reaction flask 1"}
vessel_id = extract_vessel_id(vessel_dict)
vessel_display = get_vessel_display_info(vessel_dict)
debug_print(f"Dict format: {vessel_dict} -> ID: {vessel_id}, display: {vessel_display}")
debug_print(f" Dict format: {vessel_dict} → ID: {vessel_id}, display: {vessel_display}")
# String format
vessel_str = "flask_2"
vessel_id = extract_vessel_id(vessel_str)
vessel_display = get_vessel_display_info(vessel_str)
debug_print(f"String format: {vessel_str} -> ID: {vessel_id}, display: {vessel_display}")
debug_print("Test finished")
debug_print(f" String format: {vessel_str} → ID: {vessel_id}, display: {vessel_display}")
debug_print("Test finished 🎉")
if __name__ == "__main__":
test_stir_protocol()

View File

@@ -1,57 +1,36 @@
"""Shared logging utilities for the compilers"""
import inspect
# 🆕 Create progress-log actions
import logging
from typing import Dict, Any
# Map module names to log prefixes
_MODULE_PREFIXES = {
"add_protocol": "[ADD]",
"adjustph_protocol": "[ADJUSTPH]",
"clean_vessel_protocol": "[CLEAN_VESSEL]",
"dissolve_protocol": "[DISSOLVE]",
"dry_protocol": "[DRY]",
"evacuateandrefill_protocol": "[EVACUATE]",
"evaporate_protocol": "[EVAPORATE]",
"filter_protocol": "[FILTER]",
"heatchill_protocol": "[HEATCHILL]",
"hydrogenate_protocol": "[HYDROGENATE]",
"pump_protocol": "[PUMP]",
"recrystallize_protocol": "[RECRYSTALLIZE]",
"reset_handling_protocol": "[RESET]",
"run_column_protocol": "[RUN_COLUMN]",
"separate_protocol": "[SEPARATE]",
"stir_protocol": "[STIR]",
"wash_solid_protocol": "[WASH_SOLID]",
"vessel_parser": "[VESSEL_PARSER]",
"unit_parser": "[UNIT_PARSER]",
"resource_helper": "[RESOURCE_HELPER]",
}
logger = logging.getLogger(__name__)
def debug_print(message, prefix=None):
"""Debug output — prefix chosen automatically from the calling module"""
if prefix is None:
frame = inspect.currentframe()
caller = frame.f_back if frame else None
module_name = ""
if caller:
module_name = caller.f_globals.get("__name__", "")
# Use the last segment as the short module name
module_name = module_name.rsplit(".", 1)[-1]
prefix = _MODULE_PREFIXES.get(module_name, f"[{module_name.upper()}]")
logger = logging.getLogger("unilabos.compile")
def debug_print(message, prefix="[UNIT_PARSER]"):
"""Debug output"""
logger.info(f"{prefix} {message}")
def action_log(message: str, emoji: str = "📝", prefix="[HIGH-LEVEL OPERATION]") -> Dict[str, Any]:
"""Create an action-log entry"""
full_message = f"{prefix} {emoji} {message}"
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"log_message": full_message,
"progress_message": full_message
"""Create an action-log entry - supports Chinese text and emoji"""
try:
full_message = f"{prefix} {emoji} {message}"
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"log_message": full_message,
"progress_message": full_message
}
}
}
except Exception as e:
# Fall back to plain text if the emoji causes problems
safe_message = f"{prefix} {message}"
return {
"action_name": "wait",
"action_kwargs": {
"time": 0.1,
"log_message": safe_message,
"progress_message": safe_message
}
}
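The log action is just a near-zero-length `wait` whose kwargs carry the message, so it rides through the existing action pipeline and shows up as progress text. A self-contained sketch of the resulting structure (the `make_action_log` name is illustrative; the body mirrors `action_log` minus the emoji fallback):

```python
from typing import Dict, Any

def make_action_log(message: str, emoji: str = "📝",
                    prefix: str = "[HIGH-LEVEL OPERATION]") -> Dict[str, Any]:
    """Wrap a log message in a 0.1 s wait action so executors display it as progress."""
    full = f"{prefix} {emoji} {message}"
    return {
        "action_name": "wait",
        "action_kwargs": {"time": 0.1, "log_message": full, "progress_message": full},
    }

entry = make_action_log("washing solid", emoji="🧼")
print(entry["action_name"])                   # wait
print(entry["action_kwargs"]["log_message"])  # [HIGH-LEVEL OPERATION] 🧼 washing solid
```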

View File

@@ -1,172 +0,0 @@
"""
Resource-instance compatibility layer
Provides ensure_resource_instance(), which normalizes dict / ResourceDictInstance inputs to
ResourceDictInstance so that compilers can migrate to strongly typed resources incrementally.
"""
from typing import Any, Dict, Optional, Union
from unilabos.resources.resource_tracker import ResourceDictInstance
def ensure_resource_instance(
resource: Union[Dict[str, Any], ResourceDictInstance, None],
) -> Optional[ResourceDictInstance]:
"""Normalize a dict or ResourceDictInstance to a ResourceDictInstance
Called at every compiler entry point, this keeps old dict arguments and new
ResourceDictInstance arguments working side by side.
Args:
resource: resource data; may be a plain dict, a ResourceDictInstance, or None
Returns:
ResourceDictInstance, or None (when the input is None)
"""
if resource is None:
return None
if isinstance(resource, ResourceDictInstance):
return resource
if isinstance(resource, dict):
return ResourceDictInstance.get_resource_instance_from_dict(resource)
raise TypeError(f"Unsupported resource type: {type(resource)}; expected dict or ResourceDictInstance")
def resource_to_dict(resource: Union[Dict[str, Any], ResourceDictInstance]) -> Dict[str, Any]:
"""Normalize a ResourceDictInstance or dict to a plain dict
For code paths that need dict operations (e.g. manipulating the children dict).
Args:
resource: ResourceDictInstance or dict
Returns:
plain dict
"""
if isinstance(resource, dict):
return resource
if isinstance(resource, ResourceDictInstance):
return resource.get_plr_nested_dict()
raise TypeError(f"Unsupported resource type: {type(resource)}")
def get_resource_id(resource: Union[str, Dict[str, Any], ResourceDictInstance]) -> str:
"""Extract the ID from a resource object
Args:
resource: string ID, dict, or ResourceDictInstance
Returns:
resource ID string
"""
if isinstance(resource, str):
return resource
if isinstance(resource, ResourceDictInstance):
return resource.res_content.id
if isinstance(resource, dict):
if "id" in resource:
return resource["id"]
# Also accept the {station_id: {...}} format
first_val = next(iter(resource.values()), {})
if isinstance(first_val, dict):
return first_val.get("id", "")
return ""
raise TypeError(f"Unsupported resource type: {type(resource)}")
def get_resource_data(resource: Union[str, Dict[str, Any], ResourceDictInstance]) -> Dict[str, Any]:
"""Extract the data field from a resource object
Args:
resource: string, dict, or ResourceDictInstance
Returns:
the data dict
"""
if isinstance(resource, str):
return {}
if isinstance(resource, ResourceDictInstance):
return dict(resource.res_content.data)
if isinstance(resource, dict):
return resource.get("data", {})
return {}
def get_resource_display_info(resource: Union[str, Dict[str, Any], ResourceDictInstance]) -> str:
"""Get the resource's display info (for logging)
Args:
resource: string ID, dict, or ResourceDictInstance
Returns:
display-info string
"""
if isinstance(resource, str):
return resource
if isinstance(resource, ResourceDictInstance):
res = resource.res_content
return f"{res.id} ({res.name})" if res.name and res.name != res.id else res.id
if isinstance(resource, dict):
res_id = resource.get("id", "unknown")
res_name = resource.get("name", "")
if res_name and res_name != res_id:
return f"{res_id} ({res_name})"
return res_id
return str(resource)
def get_resource_liquid_volume(resource: Union[Dict[str, Any], ResourceDictInstance]) -> float:
"""Get the liquid volume from a resource
Args:
resource: dict or ResourceDictInstance
Returns:
total liquid volume (mL)
"""
data = get_resource_data(resource)
liquids = data.get("liquid", [])
if isinstance(liquids, list):
return sum(l.get("volume", 0.0) for l in liquids if isinstance(l, dict))
return 0.0
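Unlike the `liquid_volume` scalar used elsewhere, this helper sums a list of per-liquid entries under `data["liquid"]`. A dict-only sketch of that summation (the `total_liquid_volume` name is illustrative):

```python
def total_liquid_volume(resource: dict) -> float:
    """Sum the 'volume' of each entry in data['liquid'], skipping malformed entries."""
    liquids = resource.get("data", {}).get("liquid", [])
    if isinstance(liquids, list):
        return sum(entry.get("volume", 0.0) for entry in liquids if isinstance(entry, dict))
    return 0.0

mixture = {"id": "flask_1", "data": {"liquid": [
    {"name": "water", "volume": 30.0},
    {"name": "ethanol", "volume": 20.0},
    "not-a-dict",  # malformed entries are skipped
]}}
print(total_liquid_volume(mixture))  # 50.0
```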
def update_vessel_volume(vessel, G, new_volume: float, description: str = "") -> None:
"""
Update the vessel volume - updates both the vessel dict and the graph node
Args:
vessel: vessel dict or ResourceDictInstance
G: network graph (nx.DiGraph)
new_volume: new volume (mL)
description: description of the update (for logging)
"""
import logging
logger = logging.getLogger("unilabos.compile")
vessel_id = get_resource_id(vessel)
if description:
logger.info(f"[RESOURCE] Updating vessel volume - {description}")
# Update the volume in the vessel dict
if isinstance(vessel, dict):
if "data" not in vessel:
vessel["data"] = {}
lv = vessel["data"].get("liquid_volume")
if isinstance(lv, list) and len(lv) > 0:
vessel["data"]["liquid_volume"][0] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
# Also update the vessel data in the graph
if vessel_id and vessel_id in G.nodes():
if "data" not in G.nodes[vessel_id]:
G.nodes[vessel_id]["data"] = {}
node_lv = G.nodes[vessel_id]["data"].get("liquid_volume")
if isinstance(node_lv, list) and len(node_lv) > 0:
G.nodes[vessel_id]["data"]["liquid_volume"][0] = new_volume
else:
G.nodes[vessel_id]["data"]["liquid_volume"] = new_volume
logger.info(f"[RESOURCE] Vessel '{vessel_id}' volume updated to: {new_volume:.2f}mL")

View File

@@ -184,42 +184,6 @@ def parse_time_input(time_input: Union[str, float]) -> float:
return time_sec
def parse_temperature_input(temp_input: Union[str, float], default_temp: float = 25.0) -> float:
"""
Parse a temperature input; accepts strings and numbers
Args:
temp_input: temperature input (e.g. "256 °C", "reflux", 45.0)
default_temp: default temperature
Returns:
float: temperature (°C)
"""
if not temp_input:
return default_temp
if isinstance(temp_input, (int, float)):
return float(temp_input)
temp_str = str(temp_input).lower().strip()
# Special temperature keywords
special_temps = {
"room temperature": 25.0, "reflux": 78.0, "ice bath": 0.0,
"boiling": 100.0, "hot": 60.0, "warm": 40.0, "cold": 10.0,
}
if temp_str in special_temps:
return special_temps[temp_str]
# Regex parsing (e.g. "256 °C", "45°C", "45")
match = re.search(r'(\d+(?:\.\d+)?)\s*°?[cf]?', temp_str)
if match:
return float(match.group(1))
debug_print(f"Could not parse temperature: '{temp_str}'; using default: {default_temp}°C")
return default_temp
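The parser above tries keywords first and falls back to a bare-number regex. A self-contained sketch of the same precedence (the `parse_temp` name is illustrative; keyword values match the table above):

```python
import re

SPECIAL_TEMPS = {"room temperature": 25.0, "reflux": 78.0, "ice bath": 0.0,
                 "boiling": 100.0, "hot": 60.0, "warm": 40.0, "cold": 10.0}

def parse_temp(value, default=25.0):
    """Keywords first, then a bare-number regex; fall back to the default."""
    if not value:
        return default
    if isinstance(value, (int, float)):
        return float(value)
    text = str(value).lower().strip()
    if text in SPECIAL_TEMPS:
        return SPECIAL_TEMPS[text]
    match = re.search(r'(\d+(?:\.\d+)?)\s*°?[cf]?', text)
    return float(match.group(1)) if match else default

print(parse_temp("reflux"))   # 78.0
print(parse_temp("45°C"))     # 45.0
print(parse_temp("unknown"))  # 25.0
```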
# Test function
def test_unit_parser():
"""Test the unit-parsing helpers"""

View File

@@ -1,23 +1,27 @@
import networkx as nx
from .logger_util import debug_print
from .resource_helper import get_resource_id, get_resource_data
def get_vessel(vessel):
"""
Normalize the vessel parameter, returning vessel_id and vessel_data.
Accepts dict, str, and ResourceDictInstance.
Args:
vessel: a dict, a string, or a ResourceDictInstance holding the vessel's ID or data.
vessel: a dict or a string holding the vessel's ID or data.
Returns:
tuple: vessel_id and vessel_data.
"""
# Handled uniformly via resource_helper
vessel_id = get_resource_id(vessel)
vessel_data = get_resource_data(vessel)
if isinstance(vessel, dict):
if "id" not in vessel:
vessel_id = list(vessel.values())[0].get("id", "")
else:
vessel_id = vessel.get("id", "")
vessel_data = vessel.get("data", {})
else:
vessel_id = str(vessel)
vessel_data = {}
return vessel_id, vessel_data
@@ -274,31 +278,4 @@ def find_solid_dispenser(G: nx.DiGraph) -> str:
return node
debug_print(f"❌ No solid dispenser found")
return ""
def find_connected_heatchill(G: nx.DiGraph, vessel: str) -> str:
"""Find the heat/chill device connected to the given vessel"""
heatchill_nodes = []
for node in G.nodes():
node_data = G.nodes[node]
node_class = node_data.get('class', '') or ''
node_name = node.lower()
if ('heatchill' in node_class.lower() or 'virtual_heatchill' in node_class
or 'heater' in node_name or 'heat' in node_name):
heatchill_nodes.append(node)
# Check connections
if vessel and heatchill_nodes:
for hc in heatchill_nodes:
if G.has_edge(hc, vessel) or G.has_edge(vessel, hc):
debug_print(f"Heating device '{hc}' is connected to vessel '{vessel}'")
return hc
# Fall back to the first available device
if heatchill_nodes:
debug_print(f"Using the first heating device: {heatchill_nodes[0]}")
return heatchill_nodes[0]
debug_print("No heating device found; using the default device")
return "heatchill_1"
return ""

View File

@@ -4,55 +4,199 @@ import logging
import re
from .utils.unit_parser import parse_time_input, parse_volume_input
from .utils.resource_helper import get_resource_id, get_resource_display_info, get_resource_liquid_volume, update_vessel_volume
from .utils.logger_util import debug_print
logger = logging.getLogger(__name__)
def debug_print(message):
"""Debug output"""
logger.info(f"[WASH_SOLID] {message}")
def find_solvent_source(G: nx.DiGraph, solvent: str) -> str:
"""Find the solvent source"""
"""Find the solvent source (trimmed version)"""
debug_print(f"🔍 Looking for solvent source: {solvent}")
# Simplified search list
search_patterns = [
f"flask_{solvent}", f"bottle_{solvent}", f"reagent_{solvent}",
"liquid_reagent_bottle_1", "flask_1", "solvent_bottle"
]
for pattern in search_patterns:
if pattern in G.nodes():
debug_print(f"Found solvent source: {pattern}")
debug_print(f"🎉 Found solvent source: {pattern}")
return pattern
debug_print(f"Using default solvent source: flask_{solvent}")
debug_print(f"⚠️ Using default solvent source: flask_{solvent}")
return f"flask_{solvent}"
def find_filtrate_vessel(G: nx.DiGraph, filtrate_vessel: str = "") -> str:
"""Find the filtrate vessel"""
"""Find the filtrate vessel (trimmed version)"""
debug_print(f"🔍 Looking for filtrate vessel: {filtrate_vessel}")
# If one is specified and exists, use it directly
if filtrate_vessel and filtrate_vessel in G.nodes():
debug_print(f"✅ Using the specified vessel: {filtrate_vessel}")
return filtrate_vessel
# Simplified search list
default_vessels = ["waste_workup", "filtrate_vessel", "flask_1", "collection_bottle_1"]
for vessel in default_vessels:
if vessel in G.nodes():
debug_print(f"Found filtrate vessel: {vessel}")
debug_print(f"🎉 Found filtrate vessel: {vessel}")
return vessel
debug_print(f"⚠️ Using default filtrate vessel: waste_workup")
return "waste_workup"
def extract_vessel_id(vessel) -> str:
"""Extract the vessel_id from the vessel parameter; accepts str / dict / ResourceDictInstance"""
return get_resource_id(vessel)
def extract_vessel_id(vessel: Union[str, dict]) -> str:
"""
Extract the vessel_id from the vessel parameter
Args:
vessel: vessel dict or vessel_id string
Returns:
str: vessel_id
"""
if isinstance(vessel, dict):
vessel_id = list(vessel.values())[0].get("id", "")
debug_print(f"🔧 Extracted ID from vessel dict: {vessel_id}")
return vessel_id
elif isinstance(vessel, str):
debug_print(f"🔧 vessel parameter is a string: {vessel}")
return vessel
else:
debug_print(f"⚠️ Invalid vessel parameter type: {type(vessel)}")
return ""
def get_vessel_display_info(vessel) -> str:
"""Get the vessel's display info (for logging); accepts str / dict / ResourceDictInstance"""
return get_resource_display_info(vessel)
def get_vessel_display_info(vessel: Union[str, dict]) -> str:
"""
Get the vessel's display info (for logging)
Args:
vessel: vessel dict or vessel_id string
Returns:
str: display info
"""
if isinstance(vessel, dict):
vessel_id = vessel.get("id", "unknown")
vessel_name = vessel.get("name", "")
if vessel_name:
return f"{vessel_id} ({vessel_name})"
else:
return vessel_id
else:
return str(vessel)
def get_vessel_liquid_volume(vessel: dict) -> float:
"""
Get the liquid volume in a vessel - supports vessel dicts
Args:
vessel: vessel dict
Returns:
float: liquid volume (mL)
"""
if not vessel or "data" not in vessel:
debug_print(f"⚠️ Vessel data is empty; returning 0.0 mL")
return 0.0
vessel_data = vessel["data"]
vessel_id = vessel.get("id", "unknown")
debug_print(f"🔍 Reading volume data of vessel '{vessel_id}': {vessel_data}")
# Check the liquid_volume field
if "liquid_volume" in vessel_data:
liquid_volume = vessel_data["liquid_volume"]
# Handle the list format
if isinstance(liquid_volume, list):
if len(liquid_volume) > 0:
volume = liquid_volume[0]
if isinstance(volume, (int, float)):
debug_print(f"✅ Vessel '{vessel_id}' volume: {volume}mL (list format)")
return float(volume)
# Handle the plain numeric format
elif isinstance(liquid_volume, (int, float)):
debug_print(f"✅ Vessel '{vessel_id}' volume: {liquid_volume}mL (numeric format)")
return float(liquid_volume)
# Check other possible volume fields
volume_keys = ['current_volume', 'total_volume', 'volume']
for key in volume_keys:
if key in vessel_data:
try:
volume = float(vessel_data[key])
if volume > 0:
debug_print(f"✅ Vessel '{vessel_id}' volume: {volume}mL (field: {key})")
return volume
except (ValueError, TypeError):
continue
debug_print(f"⚠️ Could not determine the volume of vessel '{vessel_id}'; returning default 0.0 mL")
return 0.0
def update_vessel_volume(vessel: dict, G: nx.DiGraph, new_volume: float, description: str = "") -> None:
"""
Update the vessel volume - updates both the vessel dict and the graph node
Args:
vessel: vessel dict
G: network graph
new_volume: new volume
description: description of the update
"""
vessel_id = vessel.get("id", "unknown")
if description:
debug_print(f"🔧 Updating vessel volume - {description}")
# Update the volume in the vessel dict
if "data" in vessel:
if "liquid_volume" in vessel["data"]:
current_volume = vessel["data"]["liquid_volume"]
if isinstance(current_volume, list):
if len(current_volume) > 0:
vessel["data"]["liquid_volume"][0] = new_volume
else:
vessel["data"]["liquid_volume"] = [new_volume]
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"]["liquid_volume"] = new_volume
else:
vessel["data"] = {"liquid_volume": new_volume}
# Also update the vessel data in the graph
if vessel_id in G.nodes():
if 'data' not in G.nodes[vessel_id]:
G.nodes[vessel_id]['data'] = {}
vessel_node_data = G.nodes[vessel_id]['data']
current_node_volume = vessel_node_data.get('liquid_volume', 0.0)
if isinstance(current_node_volume, list):
if len(current_node_volume) > 0:
G.nodes[vessel_id]['data']['liquid_volume'][0] = new_volume
else:
G.nodes[vessel_id]['data']['liquid_volume'] = [new_volume]
else:
G.nodes[vessel_id]['data']['liquid_volume'] = new_volume
debug_print(f"📊 Vessel '{vessel_id}' volume updated to: {new_volume:.2f}mL")
def generate_wash_solid_protocol(
G: nx.DiGraph,
vessel: Union[str, dict],
vessel: Union[str, dict], # 🔧 Changed: supports vessel dicts
solvent: str,
volume: Union[float, str] = "50",
filtrate_vessel: Union[str, dict] = "",
filtrate_vessel: Union[str, dict] = "", # 🔧 Changed: supports vessel dicts
temp: float = 25.0,
stir: bool = False,
stir_speed: float = 0.0,
@@ -66,7 +210,7 @@ def generate_wash_solid_protocol(
) -> List[Dict[str, Any]]:
"""
Generate a solid-washing protocol - supports vessel dicts and volume arithmetic
Args:
G: directed graph; nodes are devices and vessels, edges are fluid lines
vessel: washing-vessel dict (passed in from XDL) or vessel-ID string
@@ -83,78 +227,106 @@ def generate_wash_solid_protocol(
mass: solid mass (used to compute the solvent amount)
event: event description
**kwargs: other optional parameters
Returns:
List[Dict[str, Any]]: action sequence for the solid-washing operation
"""
# 🔧 核心修改从vessel参数中提取vessel_id
vessel_id = extract_vessel_id(vessel)
vessel_display = get_vessel_display_info(vessel)
# 🔧 处理filtrate_vessel参数
filtrate_vessel_id = extract_vessel_id(filtrate_vessel) if filtrate_vessel else ""
debug_print(f"开始生成固体清洗协议: vessel={vessel_id}, solvent={solvent}, volume={volume}, repeats={repeats}")
# 记录清洗前的容器状态
debug_print("🧼" * 20)
debug_print("🚀 开始生成固体清洗协议支持vessel字典和体积运算")
debug_print(f"📝 输入参数:")
debug_print(f" 🥽 vessel: {vessel_display} (ID: {vessel_id})")
debug_print(f" 🧪 solvent: {solvent}")
debug_print(f" 💧 volume: {volume}")
debug_print(f" 🗑️ filtrate_vessel: {filtrate_vessel_id}")
debug_print(f" ⏰ time: {time}")
debug_print(f" 🔄 repeats: {repeats}")
debug_print("🧼" * 20)
# 🔧 新增:记录清洗前的容器状态
debug_print("🔍 记录清洗前容器状态...")
    if isinstance(vessel, dict):
        original_volume = get_vessel_liquid_volume(vessel)
        debug_print(f"📊 Liquid volume before washing: {original_volume:.2f} mL")
    else:
        original_volume = 0.0
        debug_print(f"📊 vessel is a string; volume information is unavailable")

    # 📋 Quick validation
    if not vessel_id or vessel_id not in G.nodes():  # 🔧 use vessel_id
        debug_print("❌ Vessel validation failed! 😱")
        raise ValueError("invalid 'vessel' argument")
    if not solvent:
        debug_print("❌ Solvent must not be empty! 😱")
        raise ValueError("'solvent' argument must not be empty")
    debug_print("✅ Basic validation passed 🎯")

    # 🔄 Parameter parsing
    debug_print("📍 Step 1: parsing parameters... ⚡")
    final_volume = parse_volume_input(volume, volume_spec, mass)
    final_time = parse_time_input(time)
    # Repeat-count handling (simplified)
    if repeats_spec:
        spec_map = {'few': 2, 'several': 3, 'many': 4, 'thorough': 5}
        final_repeats = next((v for k, v in spec_map.items() if k in repeats_spec.lower()), repeats)
    else:
        final_repeats = max(1, min(repeats, 5))  # clamp to 1-5 washes
    # 🕐 Simulation-time optimization
    debug_print("  ⏱️ Optimizing simulated time...")
    original_time = final_time
    if final_time > 60.0:
        final_time = 60.0  # cap at 60 seconds
        debug_print(f"  🎮 Time optimization: {original_time}s → {final_time}s")
    # Parameter clamping
    temp = max(25.0, min(temp, 80.0))  # temperature range 25-80 °C
    stir_speed = max(0.0, min(stir_speed, 300.0)) if stir else 0.0  # speed range 0-300
    debug_print(f"🎯 Final parameters: volume={final_volume}mL, time={final_time}s, repeats={final_repeats}")
    # 🔍 Device lookup
    debug_print("📍 Step 2: locating devices... 🔍")
    try:
        solvent_source = find_solvent_source(G, solvent)
        actual_filtrate_vessel = find_filtrate_vessel(G, filtrate_vessel_id)
        debug_print(f"🎉 Device configuration complete ✨")
        debug_print(f"  🧪 solvent source: {solvent_source}")
        debug_print(f"  🗑️ filtrate vessel: {actual_filtrate_vessel}")
    except Exception as e:
        debug_print(f"❌ Device lookup failed: {str(e)} 😭")
        raise ValueError(f"device lookup failed: {str(e)}")
    # 🚀 Build the action sequence
    debug_print("📍 Step 3: generating wash actions... 🧼")
    action_sequence = []

    # 🔧 New: volume-tracking variables
    current_volume = original_volume
    total_solvent_used = 0.0

    for cycle in range(final_repeats):
        debug_print(f"  🔄 Wash {cycle+1}/{final_repeats}...")
        # 1. Transfer solvent
        try:
            from .pump_protocol import generate_pump_protocol_with_rinsing
            debug_print(f"  💧 Adding solvent: {final_volume}mL {solvent}")
            transfer_actions = generate_pump_protocol_with_rinsing(
                G=G,
                from_vessel=solvent_source,
                to_vessel=vessel_id,  # 🔧 use vessel_id
                volume=final_volume,
                amount="",
                time=0.0,
@@ -166,160 +338,211 @@ def generate_wash_solid_protocol(
                flowrate=2.5,
                transfer_flowrate=0.5
            )
            if transfer_actions:
                action_sequence.extend(transfer_actions)
                debug_print(f"  ✅ Transfer actions: {len(transfer_actions)} 🚚")

            # 🔧 New: update the volume after adding solvent
            current_volume += final_volume
            total_solvent_used += final_volume
            if isinstance(vessel, dict):
                update_vessel_volume(vessel, G, current_volume,
                                     f"after wash {cycle+1}: added {final_volume}mL of solvent")
        except Exception as e:
            debug_print(f"  Transfer failed: {str(e)} 😞")
        # 2. Stir (if requested)
        if stir and final_time > 0:
            debug_print(f"  🌪️ Stirring: {final_time}s @ {stir_speed}RPM")
            stir_action = {
                "device_id": "stirrer_1",
                "action_name": "stir",
                "action_kwargs": {
                    "vessel": {"id": vessel_id},  # 🔧 use vessel_id
                    "time": str(time),
                    "stir_time": final_time,
                    "stir_speed": stir_speed,
                    "settling_time": 10.0  # 🕐 shortened settling time
                }
            }
            action_sequence.append(stir_action)
            debug_print(f"  ✅ Stir action: {final_time}s, {stir_speed}RPM 🌪️")
        # 3. Filter
        debug_print(f"  🌊 Filtering into: {actual_filtrate_vessel}")
        filter_action = {
            "device_id": "filter_1",
            "action_name": "filter",
            "action_kwargs": {
                "vessel": {"id": vessel_id},  # 🔧 use vessel_id
                "filtrate_vessel": actual_filtrate_vessel,
                "temp": temp,
                "volume": final_volume
            }
        }
        action_sequence.append(filter_action)
        debug_print(f"  ✅ Filter action: → {actual_filtrate_vessel} 🌊")

        # 🔧 New: update the volume after filtration (liquid is removed)
        # Assume the filtrate is drawn off while the solid stays in the vessel
        filtered_volume = current_volume * 0.9  # assume 90% of the liquid is filtered off
        current_volume = current_volume - filtered_volume
        if isinstance(vessel, dict):
            update_vessel_volume(vessel, G, current_volume,
                                 f"after filtration in wash {cycle+1}")
        # 4. Wait (shortened)
        wait_time = 5.0  # 🕐 wait shortened from 10s → 5s
        action_sequence.append({
            "action_name": "wait",
            "action_kwargs": {"time": wait_time}
        })
        debug_print(f"  ✅ Wait: {wait_time}s ⏰")

    # 🔧 New: final state report after washing completes
    if isinstance(vessel, dict):
        final_volume_vessel = get_vessel_liquid_volume(vessel)
    else:
        final_volume_vessel = current_volume
    # 🎊 Summary
    debug_print("🧼" * 20)
    debug_print(f"🎉 Solid-washing protocol generated! ✨")
    debug_print(f"📊 Protocol statistics:")
    debug_print(f"  📋 total actions: {len(action_sequence)}")
    debug_print(f"  🥽 wash vessel: {vessel_display}")
    debug_print(f"  🧪 solvent: {solvent}")
    debug_print(f"  💧 volume per wash: {final_volume}mL")
    debug_print(f"  🔄 wash count: {final_repeats}")
    debug_print(f"  💧 total solvent used: {total_solvent_used:.2f}mL")
    debug_print(f"📊 Volume changes:")
    debug_print(f"  - volume before washing: {original_volume:.2f}mL")
    debug_print(f"  - volume after washing: {final_volume_vessel:.2f}mL")
    debug_print(f"  - total solvent used: {total_solvent_used:.2f}mL")
    debug_print(f"⏱️ Estimated total time: {(final_time + 5) * final_repeats / 60:.1f} min")
    debug_print("🧼" * 20)
    return action_sequence
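Each entry the generator returns is a plain dict keyed by `device_id`, `action_name`, and `action_kwargs` (scheduler-level actions such as `wait` carry no `device_id`). A minimal dispatcher only needs to route on those keys; the sketch below is illustrative, with made-up device names, and is not the project's executor:

```python
# A two-action sequence shaped like the generator's output.
action_sequence = [
    {"device_id": "filter_1", "action_name": "filter",
     "action_kwargs": {"vessel": {"id": "filter_flask_1"}, "temp": 25.0, "volume": 50.0}},
    {"action_name": "wait", "action_kwargs": {"time": 5.0}},
]

def dispatch(actions):
    """Describe what an executor would do with each action, in order."""
    log = []
    for action in actions:
        target = action.get("device_id", "<scheduler>")  # no device_id → scheduler-level action
        log.append(f"{target}: {action['action_name']}({action['action_kwargs']})")
    return log

for line in dispatch(action_sequence):
    print(line)
```

Keeping the actions as plain dicts makes the sequence trivially serializable for the ROS/websocket transports mentioned elsewhere in this changeset.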

# 🔧 New: convenience functions
def wash_with_water(G: nx.DiGraph, vessel: Union[str, dict],
                    volume: Union[float, str] = "50",
                    repeats: int = 2) -> List[Dict[str, Any]]:
    """Wash the solid with water."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"💧 Water wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, "water", volume=volume, repeats=repeats)
def wash_with_ethanol(G: nx.DiGraph, vessel: Union[str, dict],
                      volume: Union[float, str] = "30",
                      repeats: int = 1) -> List[Dict[str, Any]]:
    """Wash the solid with ethanol."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"🍺 Ethanol wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, "ethanol", volume=volume, repeats=repeats)
def wash_with_acetone(G: nx.DiGraph, vessel: Union[str, dict],
                      volume: Union[float, str] = "25",
                      repeats: int = 1) -> List[Dict[str, Any]]:
    """Wash the solid with acetone."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"💨 Acetone wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, "acetone", volume=volume, repeats=repeats)
def wash_with_ether(G: nx.DiGraph, vessel: Union[str, dict],
                    volume: Union[float, str] = "40",
                    repeats: int = 2) -> List[Dict[str, Any]]:
    """Wash the solid with diethyl ether."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"🌬️ Ether wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, "diethyl_ether", volume=volume, repeats=repeats)
def wash_with_cold_solvent(G: nx.DiGraph, vessel: Union[str, dict],
                           solvent: str, volume: Union[float, str] = "30",
                           repeats: int = 1) -> List[Dict[str, Any]]:
    """Wash the solid with a cold solvent."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"❄️ Cold {solvent} wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, solvent, volume=volume,
                                        temp=5.0, repeats=repeats)
def wash_with_hot_solvent(G: nx.DiGraph, vessel: Union[str, dict],
                          solvent: str, volume: Union[float, str] = "50",
                          repeats: int = 1) -> List[Dict[str, Any]]:
    """Wash the solid with a hot solvent."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"🔥 Hot {solvent} wash: {vessel_display} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, solvent, volume=volume,
                                        temp=60.0, repeats=repeats)
def wash_with_stirring(G: nx.DiGraph, vessel: Union[str, dict],
                       solvent: str, volume: Union[float, str] = "50",
                       stir_time: Union[str, float] = "5 min",
                       repeats: int = 1) -> List[Dict[str, Any]]:
    """Solvent wash with stirring."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"🌪️ Stirred wash: {vessel_display} with {solvent} ({repeats}x)")
    return generate_wash_solid_protocol(G, vessel, solvent, volume=volume,
                                        stir=True, stir_speed=200.0,
                                        time=stir_time, repeats=repeats)
def thorough_wash(G: nx.DiGraph, vessel: Union[str, dict],
                  solvent: str, volume: Union[float, str] = "50") -> List[Dict[str, Any]]:
    """Thorough wash (many repeats)."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"🔄 Thorough wash: {vessel_display} with {solvent} (5x)")
    return generate_wash_solid_protocol(G, vessel, solvent, volume=volume, repeats=5)
def quick_rinse(G: nx.DiGraph, vessel: Union[str, dict],
                solvent: str, volume: Union[float, str] = "20") -> List[Dict[str, Any]]:
    """Quick rinse (single pass, small volume)."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"⚡ Quick rinse: {vessel_display} with {solvent}")
    return generate_wash_solid_protocol(G, vessel, solvent, volume=volume, repeats=1)
def sequential_wash(G: nx.DiGraph, vessel: Union[str, dict],
                    solvents: list, volume: Union[float, str] = "40") -> List[Dict[str, Any]]:
    """Sequential wash with multiple solvents."""
    vessel_display = get_vessel_display_info(vessel)
    debug_print(f"📝 Sequential wash: {vessel_display} with {' → '.join(solvents)}")
    action_sequence = []
    for solvent in solvents:
        wash_actions = generate_wash_solid_protocol(G, vessel, solvent,
                                                    volume=volume, repeats=1)
        action_sequence.extend(wash_actions)
    return action_sequence
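The fuzzy repeat-count handling inside `generate_wash_solid_protocol` (spec words like "thorough" mapped to counts, numeric values clamped to 1-5) is compact enough to sketch on its own. `resolve_repeats` is a hypothetical helper name used only for this illustration:

```python
def resolve_repeats(repeats: int, repeats_spec: str = "") -> int:
    """Map a fuzzy spec word to a wash count, else clamp the numeric value to 1-5."""
    spec_map = {'few': 2, 'several': 3, 'many': 4, 'thorough': 5}
    if repeats_spec:
        # First spec word found in the text wins; fall back to the numeric value.
        return next((v for k, v in spec_map.items() if k in repeats_spec.lower()), repeats)
    return max(1, min(repeats, 5))  # clamp to 1-5 washes

print(resolve_repeats(3))                     # → 3
print(resolve_repeats(99))                    # → 5
print(resolve_repeats(1, "wash thoroughly"))  # → 5
```

Note that, as in the original, a non-matching `repeats_spec` falls back to the raw `repeats` value without clamping.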
# Test function
def test_wash_solid_protocol():
    """Test the solid-washing protocol."""
    debug_print("🧪 === WASH SOLID PROTOCOL test ===")

    # Test vessel-argument handling
    debug_print("🔧 Testing vessel-argument handling...")

    # Dict form
    vessel_dict = {"id": "filter_flask_1", "name": "filter flask 1",
                   "data": {"liquid_volume": 25.0}}
    vessel_id = extract_vessel_id(vessel_dict)
    vessel_display = get_vessel_display_info(vessel_dict)
    volume = get_vessel_liquid_volume(vessel_dict)
    debug_print(f"  dict form: {vessel_dict}")
    debug_print(f"  → ID: {vessel_id}, display: {vessel_display}, volume: {volume}mL")

    # String form
    vessel_str = "filter_flask_2"
    vessel_id = extract_vessel_id(vessel_str)
    vessel_display = get_vessel_display_info(vessel_str)
    debug_print(f"  string form: {vessel_str}")
    debug_print(f"  → ID: {vessel_id}, display: {vessel_display}")
    debug_print("✅ Test complete 🎉")

if __name__ == "__main__":
    test_wash_solid_protocol()
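The dict-or-string vessel handling exercised by this test can be sketched as follows. These one-liners only mirror what `extract_vessel_id` and `get_vessel_liquid_volume` are expected to do based on their usage above; the real implementations live elsewhere in the module:

```python
from typing import Union

def extract_vessel_id(vessel: Union[str, dict]) -> str:
    """Accept either a vessel dict (from XDL) or a bare ID string."""
    return vessel.get("id", "") if isinstance(vessel, dict) else vessel

def get_vessel_liquid_volume(vessel: Union[str, dict]) -> float:
    """Read liquid_volume from a vessel dict; strings carry no volume info."""
    if not isinstance(vessel, dict):
        return 0.0
    volume = vessel.get("data", {}).get("liquid_volume", 0.0)
    # liquid_volume may be stored as a scalar or as a list (first entry is current)
    return float(volume[0]) if isinstance(volume, list) and volume else float(volume or 0.0)

vessel = {"id": "filter_flask_1", "data": {"liquid_volume": 25.0}}
print(extract_vessel_id(vessel))            # → filter_flask_1
print(get_vessel_liquid_volume(vessel))     # → 25.0
print(extract_vessel_id("filter_flask_2"))  # → filter_flask_2
```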

View File

@@ -16,17 +16,12 @@ class BasicConfig:
    upload_registry = False
    machine_name = "undefined"
    vis_2d_enable = False
    no_update_feedback = False
    enable_resource_load = True
    communication_protocol = "websocket"
    startup_json_path = None  # absolute path
    disable_browser = False  # do not open the browser automatically
    port = 8002  # local HTTP service
    check_mode = False  # CI check mode, validates registry imports and file consistency
    test_mode = False  # test mode: actions are not actually executed; simulated results are returned
    extra_resource = False  # whether to load extra resources prefixed with lab_
    log_level: Literal['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] = "DEBUG"
    @classmethod
    def auth_secret(cls):
@@ -41,7 +36,7 @@ class BasicConfig:

class WSConfig:
    reconnect_interval = 5  # reconnect interval (seconds)
    max_reconnect_attempts = 999  # maximum number of reconnect attempts
    ping_interval = 30  # ping interval
# HTTP configuration
@@ -70,14 +65,13 @@ def _update_config_from_module(module):
        if not attr.startswith("_"):
            setattr(obj, attr, getattr(getattr(module, name), attr))

def _update_config_from_env():
    prefix = "UNILABOS_"
    for env_key, env_value in os.environ.items():
        if not env_key.startswith(prefix):
            continue
        try:
            key_path = env_key[len(prefix):]  # strip the UNILABOS_ prefix
            class_field = key_path.upper().split("_", 1)
            if len(class_field) != 2:
                logger.warning(f"[ENV] malformed environment variable: {env_key}")
@@ -147,5 +141,5 @@ def load_config(config_path=None):
            traceback.print_exc()
            exit(1)
    else:
        config_path = os.path.join(os.path.dirname(__file__), "local_config.py")
        load_config(config_path)
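The `UNILABOS_`-prefixed override scheme in `_update_config_from_env` can be illustrated standalone. The class/field split below mirrors the snippet above; `parse_env_override` is a hypothetical stand-in that returns the parsed parts instead of mutating config classes:

```python
def parse_env_override(env_key: str, prefix: str = "UNILABOS_"):
    """Split an environment key into (config class, field), or None if it is malformed."""
    if not env_key.startswith(prefix):
        return None
    key_path = env_key[len(prefix):]              # strip the UNILABOS_ prefix
    class_field = key_path.upper().split("_", 1)  # first '_' separates class from field
    if len(class_field) != 2:
        return None                               # malformed: no field part
    return class_field[0], class_field[1]

print(parse_env_override("UNILABOS_BASICCONFIG_LOG_LEVEL"))  # → ('BASICCONFIG', 'LOG_LEVEL')
print(parse_env_override("UNILABOS_PORT"))                   # → None
print(parse_env_override("PATH"))                            # → None
```

Because only the first underscore is split on, field names may themselves contain underscores (e.g. `LOG_LEVEL`).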

View File

@@ -6,7 +6,7 @@ Coin Cell Assembly Workstation
"""
from typing import Dict, Any, List, Optional, Union
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
from unilabos.device_comms.workstation_base import WorkstationBase, WorkflowInfo
from unilabos.device_comms.workstation_communication import (
    WorkstationCommunicationBase, CommunicationConfig, CommunicationProtocol, CoinCellCommunication
@@ -61,7 +61,7 @@ class CoinCellAssemblyWorkstation(WorkstationBase):
        # Create a resource tracker (if none was provided)
        if resource_tracker is None:
            from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
            resource_tracker = DeviceNodeResourceTracker()

        # Initialize the base class
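This hunk moves the `DeviceNodeResourceTracker` import from `unilabos.resources.resource_tracker` to `unilabos.ros.nodes.resource_tracker`. During such path migrations, a small fallback loader (a generic sketch, not part of this repository) can keep code working against either revision; the demo uses stdlib module names since the real packages are not assumed available here:

```python
import importlib

def import_first(candidates, attr):
    """Return `attr` from the first importable module path in `candidates`."""
    for path in candidates:
        try:
            module = importlib.import_module(path)
        except ImportError:
            continue  # try the next candidate path
        return getattr(module, attr)
    raise ImportError(f"{attr} not found in any of: {candidates}")

# Demo with stdlib modules: the first path fails to import, the second succeeds.
loads = import_first(["no_such_module", "json"], "loads")
print(loads('{"ok": true}'))  # → {'ok': True}
```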

Some files were not shown because too many files have changed in this diff.