Compare commits

..

32 Commits

Author SHA1 Message Date
a1b21e87b3 feat(scripts): add a stress-test script and results for the code_js1 endpoint
Add stress-test scripts run_stress_code_js1.sh and stress_code_js1.py for concurrent load testing of the code_js1 endpoint
Generate a stress-test result JSON file containing request statistics, latency, and throughput metrics
2025-10-10 23:23:02 +08:00
a4d20bc788 fix(http client): harden timeout error handling and keep the URL for error messages
Add explicit timeout error handling when sending requests and reading response bodies; clone the URL so it can be shown in error messages
2025-10-09 23:31:00 +08:00
446f63e02a refactor(script_rhai): restructure Rhai script execution and tidy the code
- Unify execution of file-based and inline scripts into the exec_rhai_code function
- Improve the implementation and readability of the shallow_diff function
- Extract the read_node_script_file and read_node_inline_script helper functions
- Remove redundant comments and reorganize import statements
2025-09-29 22:35:47 +08:00
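The shallow_diff function named in this commit is not shown in this compare view; as an illustration of what a shallow (top-level-only) diff can look like, here is a stdlib-only sketch in which the signature, the map types, and the return shape are all assumptions, not the project's actual code:

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: compare two flat string maps and return the keys whose
// top-level values differ (added, removed, or changed). Nested values are not
// recursed into -- that is what makes the diff "shallow".
fn shallow_diff(old: &BTreeMap<String, String>, new: &BTreeMap<String, String>) -> Vec<String> {
    let mut changed: Vec<String> = Vec::new();
    for (k, v) in new {
        if old.get(k) != Some(v) {
            changed.push(k.clone()); // key was added or its value changed
        }
    }
    for k in old.keys() {
        if !new.contains_key(k) {
            changed.push(k.clone()); // key was removed
        }
    }
    changed.sort();
    changed
}

fn main() {
    let old: BTreeMap<String, String> =
        BTreeMap::from([("a".to_string(), "1".to_string()), ("b".to_string(), "2".to_string())]);
    let new: BTreeMap<String, String> =
        BTreeMap::from([("a".to_string(), "1".to_string()), ("c".to_string(), "3".to_string())]);
    println!("{:?}", shallow_diff(&old, &new)); // ["b", "c"]
}
```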
c8e026e1ff feat: add a script that checks the git server's SSL/TLS status 2025-09-29 22:35:31 +08:00
62ca2a7c4a refactor(engine): raise the maximum operations limit and refine variable resolution
Raise the engine's maximum operations limit from 100,000 to 10,000,000 to support more complex expression evaluation
Refine variable resolution: evaluate as an expression only when a value explicitly starts with ctx[ or ctx.; otherwise keep the original string
Simplify the project coding-standards document, keeping the core rules and removing redundant content
2025-09-28 22:14:03 +08:00
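The resolution rule described above (evaluate a value as an expression only when it explicitly starts with ctx[ or ctx.) boils down to a prefix check. A minimal sketch, where the function name and the fall-back-to-literal behavior are assumptions:

```rust
// Hypothetical sketch of the resolution rule: only strings that explicitly
// reference the context (ctx. or ctx[) are treated as expressions; everything
// else is kept as a literal string.
fn should_eval_as_expr(value: &str) -> bool {
    value.starts_with("ctx.") || value.starts_with("ctx[")
}

fn main() {
    assert!(should_eval_as_expr("ctx.user.name"));
    assert!(should_eval_as_expr("ctx[\"order\"]"));
    // Merely containing "ctx" is not enough -- the prefix must be explicit.
    assert!(!should_eval_as_expr("plain text mentioning ctx later"));
    println!("prefix rule ok");
}
```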
c462d266f1 feat(middleware): add a global authentication middleware
Implement a Bearer-token authentication middleware with a configurable path whitelist
2025-09-26 00:25:34 +08:00
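The commit does not show how its path whitelist is matched; as an illustrative sketch of one common scheme (exact matches plus `*`-suffixed prefixes), with both the function name and the matching semantics assumed:

```rust
// Hypothetical sketch of a path whitelist check for a Bearer-token middleware:
// exact entries must match the path verbatim, and entries ending in '*' match
// any path sharing that prefix. Matching paths skip authentication.
fn is_whitelisted(path: &str, whitelist: &[&str]) -> bool {
    whitelist.iter().any(|w| {
        if let Some(prefix) = w.strip_suffix('*') {
            path.starts_with(prefix)
        } else {
            path == *w
        }
    })
}

fn main() {
    let wl = ["/api/login", "/docs/*"];
    assert!(is_whitelisted("/api/login", &wl));
    assert!(is_whitelisted("/docs/swagger", &wl));
    assert!(!is_whitelisted("/api/users", &wl));
    println!("whitelist ok");
}
```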
214605d912 feat: unify the pagination component and add batch delete
Add unified pagination statistics and batch delete to several page components
Add a batch-delete endpoint and its frontend implementation to the log management page
Normalize table pagination config: always show the total count and page-size options
2025-09-25 23:52:01 +08:00
a71bbb0961 feat(migration): add composite indexes for flow_run_logs and fix the flow_id type
Improve flow_run_logs query performance by adding composite indexes for the common sorting and filtering patterns
Change flow_id from VARCHAR(64) to BIGINT to match the entity model
Implement a last-page optimization strategy in paginated queries
2025-09-25 22:38:06 +08:00
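The "last-page optimization" is not spelled out in this view; one plausible reading is clamping the requested page to the real last page so a request beyond the end does not query an empty range. A sketch of that arithmetic, under that assumption:

```rust
// Hypothetical sketch: clamp a requested page number to the last page that
// actually contains rows, then derive the query offset. Assumes page_size >= 1.
fn clamp_page(total: u64, page: u64, page_size: u64) -> (u64, u64) {
    let pages = total.div_ceil(page_size).max(1); // at least one (possibly empty) page
    let page = page.clamp(1, pages);
    let offset = (page - 1) * page_size;
    (page, offset)
}

fn main() {
    assert_eq!(clamp_page(95, 999, 10), (10, 90)); // request past the end -> last page
    assert_eq!(clamp_page(95, 3, 10), (3, 20));
    assert_eq!(clamp_page(0, 5, 10), (1, 0));
    println!("pagination ok");
}
```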
dfa1cbdd2f feat(frontend): polish page layout and query form styles
- Add a PageHeader component to the FlowList and FlowRunLogs pages
- Switch the ScheduleJobs query form to a unified inline layout
- Add the showSizeChanger option to table pagination
- Add a Rust coding-standards document
2025-09-25 21:51:45 +08:00
3f5b52ec2a docs: move documentation link paths under the docs directory 2025-09-24 20:27:12 +08:00
a3f2f99a68 docs: add project documentation covering the overview, architecture, flow engine, and service layer
New documentation files:
- PROJECT_OVERVIEW.md project overview
- BACKEND_ARCHITECTURE.md backend architecture
- FRONTEND_ARCHITECTURE.md frontend architecture
- FLOW_ENGINE.md flow engine
- SERVICES.md service layer
- ERROR_HANDLING.md error-handling module

The documents cover the overall project introduction, the technical architecture, core module design, and implementation details
2025-09-24 20:21:45 +08:00
6ff587dc23 refactor: restructure documentation layout and file locations
docs: add Redis integration test documentation
docs: add an ID generator analysis report
docs: add free-layout and fixed-layout example documents
test: add ID generator unit tests
fix: remove duplicate frontend documentation files
2025-09-24 01:10:01 +08:00
adcd49a5db fix(deploy scripts): make the backend deploy script more robust and add debug output
Add path checks and process verification; improve error handling and debug logging
2025-09-24 00:27:28 +08:00
8c06849254 feat(scheduler): implement scheduled job management
Add a scheduled-job module supporting CRUD, enable/disable, and manual execution
- Backend: add the schedule_job model, service, routes, and a scheduler utility
- Frontend: add a scheduled-job management page
- Change the id type in flow-related APIs from String to i64
- Add the tokio-cron-scheduler dependency for cron-based scheduling
- Load enabled jobs on startup and register them with the scheduler
2025-09-24 00:21:30 +08:00
cadd336dee feat(ids): implement Snowflake-based distributed ID generation
Add the rs-snowflake dependency and implement a distributed ID generation utility
Add an ids submodule under utils providing business ID generation and parsing
Replace the previous UUID generation with the distributed ID generator
2025-09-23 00:22:06 +08:00
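rs-snowflake's exact bit layout and API are its own; as an illustration of the general Snowflake idea the commit relies on (timestamp, machine id, and sequence packed into one integer), here is a hand-rolled compose/parse pair using the classic 41/10/12 bit split as an assumption:

```rust
// Illustrative only: classic Snowflake packing with a 41-bit millisecond
// timestamp, 10-bit machine id, and 12-bit sequence. The real rs-snowflake
// layout and API may differ.
const MACHINE_BITS: u64 = 10;
const SEQ_BITS: u64 = 12;

fn compose(ts_ms: u64, machine: u64, seq: u64) -> u64 {
    (ts_ms << (MACHINE_BITS + SEQ_BITS)) | (machine << SEQ_BITS) | seq
}

fn parse(id: u64) -> (u64, u64, u64) {
    let seq = id & ((1 << SEQ_BITS) - 1);
    let machine = (id >> SEQ_BITS) & ((1 << MACHINE_BITS) - 1);
    let ts_ms = id >> (MACHINE_BITS + SEQ_BITS);
    (ts_ms, machine, seq)
}

fn main() {
    let id = compose(1_700_000_000_000, 7, 42);
    assert_eq!(parse(id), (1_700_000_000_000, 7, 42)); // lossless round trip
    println!("snowflake roundtrip ok: {id}");
}
```

Because the timestamp occupies the high bits, IDs generated later always compare greater, which is what makes such IDs usable as sortable primary keys.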
89baf9a96b feat: add frontend and backend deploy scripts
Add deploy_frontend.sh to build and publish the frontend to the OpenResty site directory
Add deploy_backend.sh to build and deploy the backend service, with graceful shutdown and log rotation
2025-09-22 22:23:50 +08:00
681abeed45 feat(flow): add a flow info display component with backend support
Add a top-left flow info component showing the flow code and name
Backend: add name/code/remark fields to the FlowDoc struct
Add fallback logic that extracts the name from the YAML
2025-09-22 22:15:45 +08:00
cb0d829884 fix(flow): fix Rhai script execution error handling and refine variable resolution
refactor(engine): refactor Rhai expression error handling into an enum type
fix(script_rhai): correct the error returns for script file read and execution failures
perf(testrun): optimize log deduplication and display in the frontend test panel
2025-09-22 20:25:05 +08:00
3362268575 feat(flow): improve Rhai script error handling and frontend/backend code-node mapping
- Change eval_rhai_expr_json to return Result so errors carry information
- Consistently use unwrap_or_else to handle Rhai expression errors
- Code-node type mapping now supports both JavaScript and Rhai
- Add a language selector to the frontend code editor
- Improve WebSocket error handling and close logic
2025-09-22 06:52:22 +08:00
067c6829f0 refactor(websocket): rework WebSocket and SSE connection logic for development and production
- Unify URL construction for WebSocket and SSE
- Use the proxy prefix in development and same-origin paths in production
- Remove hard-coded ports; configure them via environment variables
2025-09-21 23:52:06 +08:00
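The environment-dependent URL construction described above can be sketched as a pure function; the `/proxy` prefix and the parameter names here are made-up placeholders, not the project's actual config:

```rust
// Hypothetical sketch: build a WebSocket URL that goes through a dev proxy
// prefix in development and uses a same-origin path in production. The scheme
// is derived from the page origin (https -> wss, otherwise ws).
fn ws_url(origin: &str, path: &str, dev: bool) -> String {
    let scheme = if origin.starts_with("https") { "wss" } else { "ws" };
    let host = origin.split("://").nth(1).unwrap_or(origin);
    if dev {
        format!("{scheme}://{host}/proxy{path}") // dev: behind the proxy prefix
    } else {
        format!("{scheme}://{host}{path}") // prod: same-origin path
    }
}

fn main() {
    assert_eq!(ws_url("http://localhost:8888", "/ws/flow", true), "ws://localhost:8888/proxy/ws/flow");
    assert_eq!(ws_url("https://example.com", "/ws/flow", false), "wss://example.com/ws/flow");
    println!("url ok");
}
```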
7637a5c225 fix(proxy): update the preview port and improve the SSE and WS proxy config
- Change the preview port from 5173 to 8888 to avoid conflicts
- Restructure the SSE proxy config, simplifying it and fixing connection issues
- Add a WS proxy path so WebSocket traffic is proxied independently
2025-09-21 23:36:23 +08:00
30716686ed feat(ws): add WebSocket real-time communication and a standalone SSE service
Restructure the middleware layout; add a ws module that pushes flow execution events in real time over WebSocket
Split the SSE service onto its own listening port, defaulting to 8866
Improve the frontend streaming-mode toggle, with WS/SSE protocol selection
Unify streaming event handling; improve error handling and cancellation
Update Cargo.toml to add WebSocket-related dependencies
Tidy the code organization: normalize import grouping and comments
2025-09-21 22:15:33 +08:00
dd7857940f feat(flow): add a streaming execution mode with SSE support
Add a streaming execution mode that pushes node execution events and logs in real time over SSE
Refactor the HTTP executor and middleware, extracting a shared HTTP client component
Improve the frontend test panel with a streaming-mode toggle and live log display
Update dependency versions and fix the password-hash random number generator issue
Fix the frontend node-type mapping so the Code node form is usable
2025-09-21 01:48:24 +08:00
296f0ae9f6 feat(backend): add a QuickJS runtime to support a JavaScript executor
refactor(backend): rework the script_js executor to run JavaScript files and inline scripts
feat(backend): variable nodes support expression/reference shorthand input
docs: add variable-node documentation describing the shorthand syntax
style(frontend): adjust test panel styles and layout
fix(frontend): fix the node edit sidebar auto-closing when the test panel opens
build(backend): add the rquickjs dependency for JavaScript execution
2025-09-20 17:35:36 +08:00
baa787934a refactor: allow the unused get_result_mode_from_conn function 2025-09-20 00:17:31 +08:00
d8116ff8dc feat(flow): add dynamic API routes that execute a flow by its code
refactor(engine): improve node execution timing records
fix(db): correct the result-mode retrieval that ignored connection.mode
style(i18n): unify the i18n for node descriptions and output-mode options
test(flow): add test flow definition files
refactor(react): simplify dev-environment log noise reduction
2025-09-20 00:12:40 +08:00
62789fce42 feat: add a condition node and multi-language script support
refactor(flow): rename the Decision node to Condition
feat(flow): add multi-language script executors (Rhai/JS/Python)
feat(flow): implement variable mapping and execution
feat(flow): add condition node execution logic
feat(frontend): add multi-language descriptions for the start/end nodes
test: add yaml condition conversion tests
chore: remove the deprecated storage module
2025-09-19 13:41:52 +08:00
81757eecf5 feat(flow): restructure the flow engine and task executor architecture
Restructure the flow engine core, introducing an Executor interface to replace the previous TaskComponent and improving node config mapping:
1. Add a mappers module to centralize node config extraction
2. Add a Storage trait abstraction for the storage layer
3. Remove the dependency on magic ctx fields; pass node info directly
4. Add builder-pattern support for engine creation
5. Improve input validation in DSL parsing

Also mark some unused code with allow(dead_code)
2025-09-16 23:58:28 +08:00
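The Executor interface and builder are internal to the project; a minimal, abridged sketch of what such a shape could look like, where everything except the name `Executor` (the method names, the registry, the builder) is invented for illustration:

```rust
use std::collections::HashMap;

// Hypothetical sketch of an executor registry behind a builder. The commit
// names an `Executor` trait replacing `TaskComponent`; the rest is made up.
trait Executor {
    fn kind(&self) -> &'static str;
    fn run(&self, input: &str) -> String;
}

struct Echo;
impl Executor for Echo {
    fn kind(&self) -> &'static str { "echo" }
    fn run(&self, input: &str) -> String { input.to_string() }
}

#[derive(Default)]
struct EngineBuilder {
    executors: HashMap<&'static str, Box<dyn Executor>>,
}

impl EngineBuilder {
    // Chainable registration: each node kind maps to one executor.
    fn register(mut self, e: Box<dyn Executor>) -> Self {
        self.executors.insert(e.kind(), e);
        self
    }
    // Dispatch a node to its executor by kind; None if no executor is registered.
    fn run_node(&self, kind: &str, input: &str) -> Option<String> {
        self.executors.get(kind).map(|e| e.run(input))
    }
}

fn main() {
    let engine = EngineBuilder::default().register(Box::new(Echo));
    assert_eq!(engine.run_node("echo", "hi"), Some("hi".to_string()));
    assert_eq!(engine.run_node("missing", "hi"), None);
    println!("builder ok");
}
```

The design win the commit describes is visible even in this toy: node dispatch goes through one trait object per node kind, so adding a node type means adding an `Executor` impl rather than touching the engine core.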
65764a2cbc feat(FlowList): add a confirmation dialog for delete and improve error handling
Add a Popconfirm component to prevent accidental flow deletion and improve the delete error messages
2025-09-15 23:22:26 +08:00
7c201f9083 refactor(components): replace destroyOnClose with destroyOnHidden to improve component teardown
refactor(react utils): restructure the React 18 compatibility patch, consolidating the dev-environment fixes

Update the teardown logic in several components to consistently use destroyOnHidden instead of destroyOnClose. Also restructure the React 18 compatibility patch, consolidating the dev-environment fixes into the setupReactDevFixes method for better maintainability.
2025-09-15 22:04:02 +08:00
17de176609 feat(variable node): add variable assignment type definitions and improve the node menu
refactor: simplify json-schema type imports
chore: update dependencies and adjust the tsconfig configuration
2025-09-15 01:07:54 +08:00
b0963e5e37 feat(flows): add basic flow editor functionality and related components
feat(backend): add flow models and service support
feat(frontend): implement the flow editor UI and interactions
feat(assets): add flow node icon assets
feat(plugins): implement context-menu and runtime plugins
feat(components): add base node and sidebar components
feat(routes): add flow-related route configuration
feat(models): create flow and run-log data models
feat(services): implement the flow service layer
feat(migration): add flow-related database migrations
feat(config): update the frontend config to support the flow editor
feat(utils): improve axios error handling and utility functions
369 changed files with 36571 additions and 600 deletions

60
.trae/rules/code_rules.md Normal file
View File

@ -0,0 +1,60 @@
# Ordering Rules for Complex Rust Files (Compact Edition)
1. **File description**
- Use a `//!` module doc comment describing the file or module.
- Must be at the very top of the file.
2. **Copyright notice (optional)**
- If the project needs copyright information, place it after the doc comment.
3. **External dependencies and module imports (`use`)**
- All `use` statements must be at the top of the file, never in the middle or at the bottom.
- Import order: standard library -> current crate modules -> parent modules -> third-party crates
- Leave one blank line between groups; sort alphabetically within each group
- `use` statements must **not** carry comments labeling the groups
- Compactness: merge single-line imports where possible; do not force them onto multiple lines
4. **Submodule declarations (`mod`)**
- Placed after all `use` statements
- Both `pub mod` and `mod` are declared together here
5. **Constants (`const` / `static`)**
- Must come before type definitions
- Named in `SCREAMING_SNAKE_CASE`
- Compactness: keep short values on a single line; do not wrap
6. **Type aliases (`type`)**
- After constants, before structs
7. **Struct definitions (`struct`)**
- At the start of the file's type section
- All `struct`s must be grouped here, never interleaved with `impl` blocks
- Compactness: keep structs with few fields on one line; one field per line otherwise
8. **Enum definitions (`enum`)**
- Immediately after the structs; define all enum types together
- Compactness: simple enums on one line, complex enums across multiple lines
9. **Union definitions (`union`)**
- If needed, after the enums
10. **Trait definitions**
- Immediately after all data types (struct/enum/union)
11. **Implementation blocks (`impl Struct`)**
- Must come after type and trait definitions
- Method order inside: constructors -> public methods -> private methods
- Compactness: keep call chains, tuples, and match arms on one line when they fit
12. **Trait implementations (`impl Trait for Struct`)**
- Immediately after the corresponding `impl Struct`
- Do not mix with implementations for other types
13. **Free functions (`fn` not belonging to any impl)**
- Standalone functions go at the end of the file
- Grouped by purpose; comments may separate the groups
- Compactness: keep calls to functions with few parameters on one line
14. **Test module (`#[cfg(test)] mod tests`)**
- Always the last thing in the file
- Unit-test code only
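The ordering above can be condensed into a small skeleton; this abridged example (no submodules, unions, or trait definitions, and all names invented) shows the doc comment -> use -> const -> type -> struct -> enum -> impl -> free function progression:

```rust
//! Example module laid out in the prescribed order:
//! doc comment, use, const, type, struct, enum, impl, free functions.

use std::collections::HashMap;

const MAX_ITEMS: usize = 16;

type Registry = HashMap<String, Item>;

struct Item { name: String, weight: u32 }

enum Status { Ready, Full }

impl Item {
    fn new(name: &str, weight: u32) -> Self { Self { name: name.to_string(), weight } }
}

// Free function: placed after all types and impl blocks.
fn registry_status(r: &Registry) -> Status {
    if r.len() < MAX_ITEMS { Status::Ready } else { Status::Full }
}

fn main() {
    let mut r: Registry = HashMap::new();
    r.insert("a".into(), Item::new("a", 1));
    assert!(matches!(registry_status(&r), Status::Ready));
    println!("layout demo ok");
}
```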

View File

@ -0,0 +1,124 @@
//! Module: scheduled-job service (service layer).
//! Responsibilities:
//! 1) CRUD for scheduled jobs (schedule_jobs) in the database;
//! 2) sync with the scheduler after create/update/delete;
//! 3) load enabled jobs and register them at service startup.
use std::{future::Future, pin::Pin, sync::Arc};
use chrono::{DateTime, FixedOffset, Utc};
use sea_orm::{ActiveModelTrait, ColumnTrait, EntityTrait, QueryFilter, QueryOrder, Set};
use tokio_cron_scheduler::Job;
use tracing::{error, info};
use crate::{db::Db, error::AppError, models::schedule_job, utils};
/// Generic paginated response body
#[derive(serde::Serialize)]
pub struct PageResp<T> {
pub items: Vec<T>,
pub total: u64,
pub page: u64,
pub page_size: u64,
}
/// Job document (DTO returned to clients)
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug)]
pub struct ScheduleJobDoc {
pub id: i64,
pub name: String,
pub cron_expr: String,
pub enabled: bool,
pub flow_code: String,
pub created_at: DateTime<FixedOffset>,
pub updated_at: DateTime<FixedOffset>,
}
impl From<schedule_job::Model> for ScheduleJobDoc {
fn from(m: schedule_job::Model) -> Self {
Self {
id: m.id,
name: m.name,
cron_expr: m.cron_expr,
enabled: m.enabled,
flow_code: m.flow_code,
created_at: m.created_at,
updated_at: m.updated_at,
}
}
}
/// Create-job request body
#[derive(serde::Deserialize)]
pub struct CreateReq {
pub name: String,
pub cron_expr: String,
pub enabled: bool,
pub flow_code: String,
}
/// Current UTC time converted to a fixed-offset datetime
fn now_fixed_offset() -> DateTime<FixedOffset> {
Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap())
}
/// Create a job
pub async fn create(db: &Db, req: CreateReq) -> Result<ScheduleJobDoc, AppError> {
// 1) Validate the cron expression
Job::new_async(&req.cron_expr, |_id, _l| Box::pin(async {}))
.map_err(|e| AppError::BadRequest(format!("invalid cron expression: {e}")))?;
// 2) Persist to the database
let am = schedule_job::ActiveModel {
id: Set(crate::utils::generate_id()),
name: Set(req.name),
cron_expr: Set(req.cron_expr),
enabled: Set(req.enabled),
flow_code: Set(req.flow_code),
created_at: Set(now_fixed_offset()),
updated_at: Set(now_fixed_offset()),
};
let m = am.insert(db).await?;
// 3) Sync with the scheduler
let executor = build_executor_for_job(db, &m);
utils::add_or_update_job_by_model(&m, executor).await.map_err(AppError::Anyhow)?;
Ok(m.into())
}
/// Build the job-execution closure (JobExecutor)
fn build_executor_for_job(db: &Db, m: &schedule_job::Model) -> utils::JobExecutor {
let db = db.clone();
let job_id = m.id;
let job_name = m.name.clone();
Arc::new(move || {
let db = db.clone();
let job_id = job_id;
let job_name = job_name.clone();
Box::pin(async move {
match schedule_job::Entity::find_by_id(job_id).one(&db).await {
Ok(Some(model)) if !model.enabled => {
info!(target = "udmin", job = %job_name, id = %job_id, "scheduler.tick.skip");
return;
}
Ok(None) => {
info!(target = "udmin", job = %job_name, id = %job_id, "scheduler.tick.deleted");
if let Err(e) = utils::remove_job_by_id(&job_id).await {
error!(target = "udmin", id = %job_id, error = %e, "scheduler.self_remove.failed");
}
return;
}
Err(e) => {
error!(target = "udmin", job = %job_name, id = %job_id, error = %e, "scheduler.tick.error");
return;
}
_ => {}
}
info!(target = "udmin", job = %job_name, "scheduler.tick.start");
}) as Pin<Box<dyn Future<Output = ()> + Send>>
})
}

View File

@ -0,0 +1,60 @@
# Ordering Rules for Complex Rust Files (Compact Edition)
1. **File description**
- Use a `//!` module doc comment describing the file or module.
- Must be at the very top of the file.
2. **Copyright notice (optional)**
- If the project needs copyright information, place it after the doc comment.
3. **External dependencies and module imports (`use`)**
- All `use` statements must be at the top of the file, never in the middle or at the bottom.
- Import order: standard library -> current crate modules -> parent modules -> third-party crates
- Leave one blank line between groups; sort alphabetically within each group
- `use` statements must **not** carry comments labeling the groups
- Compactness: merge single-line imports where possible; do not force them onto multiple lines
4. **Submodule declarations (`mod`)**
- Placed after all `use` statements
- Both `pub mod` and `mod` are declared together here
5. **Constants (`const` / `static`)**
- Must come before type definitions
- Named in `SCREAMING_SNAKE_CASE`
- Compactness: keep short values on a single line; do not wrap
6. **Type aliases (`type`)**
- After constants, before structs
7. **Struct definitions (`struct`)**
- At the start of the file's type section
- All `struct`s must be grouped here, never interleaved with `impl` blocks
- Compactness: keep structs with few fields on one line; one field per line otherwise
8. **Enum definitions (`enum`)**
- Immediately after the structs; define all enum types together
- Compactness: simple enums on one line, complex enums across multiple lines
9. **Union definitions (`union`)**
- If needed, after the enums
10. **Trait definitions**
- Immediately after all data types (struct/enum/union)
11. **Implementation blocks (`impl Struct`)**
- Must come after type and trait definitions
- Method order inside: constructors -> public methods -> private methods
- Compactness: keep call chains, tuples, and match arms on one line when they fit
12. **Trait implementations (`impl Trait for Struct`)**
- Immediately after the corresponding `impl Struct`
- Do not mix with implementations for other types
13. **Free functions (`fn` not belonging to any impl)**
- Standalone functions go at the end of the file
- Grouped by purpose; comments may separate the groups
- Compactness: keep calls to functions with few parameters on one line
14. **Test module (`#[cfg(test)] mod tests`)**
- Always the last thing in the file
- Unit-test code only

138
README.md Normal file
View File

@ -0,0 +1,138 @@
# UdminAI Project Documentation
Welcome to the UdminAI documentation hub. This collection provides the project's complete technical documentation, covering architecture design, module descriptions, API docs, and best practices.
## 📚 Documentation Map
### 🏗️ Architecture
- **[Project Overview](docs/PROJECT_OVERVIEW.md)** - Overall introduction to UdminAI, its technical architecture, and core features
- **[Backend Architecture](docs/BACKEND_ARCHITECTURE.md)** - Detailed backend architecture design and module organization
- **[Frontend Architecture](docs/FRONTEND_ARCHITECTURE.md)** - Frontend architecture, tech stack, and component system
### 🔧 Core Modules
- **[Flow Engine](docs/FLOW_ENGINE.md)** - Design principles, execution mechanism, and extensibility of the flow orchestration engine
- **[Service Layer](docs/SERVICES.md)** - Design patterns, core services, and integration of the business service layer
- **[Route Layer](docs/ROUTES.md)** - API route design conventions, endpoint definitions, and middleware integration
- **[Data Models](docs/MODELS.md)** - Data model design principles, entity definitions, and relationship mapping
- **[Middleware](docs/MIDDLEWARES.md)** - Middleware system design, core components, and usage
### 🛠️ Infrastructure
- **[Database](docs/DATABASE.md)** - Database connection management, query optimization, and performance tuning
- **[Error Handling](docs/ERROR_HANDLING.md)** - Unified error handling, error type definitions, and handling strategies
- **[Response Format](docs/RESPONSE.md)** - API response format conventions, data structures, and best practices
- **[Utilities](docs/UTILS.md)** - The common utility function library and helper tools
### 📋 Topic Guides
- **[Redis Integration](docs/REDIS_INTEGRATION.md)** - Redis cache integration and usage guide
- **[ID Generation Analysis](docs/ID_GENERATION_ANALYSIS.md)** - Distributed ID generation strategy and implementation analysis
### 🎯 Flow Editor
- **[Variable Node Usage](docs/variable-node-usage.md)** - How to use flow variable nodes, with best practices
- **[Fixed Layout Demo](docs/flow-fixed-layout-demo.md)** - Demo and notes for the fixed-layout flow editor
- **[Free Layout Demo](docs/flow-free-layout-demo.md)** - Feature showcase of the free-layout flow editor
- **[Basic Free Layout](docs/flow-free-layout-base-demo.md)** - Basic features and usage of the free layout
- **[Simple Free Layout](docs/flow-free-layout-simple-demo.md)** - Usage notes for the simplified free-layout editor
- **[Free Layout JSON](docs/flow-free-layout-json.md)** - JSON data structure definition for the free layout
- **[SJ Free Layout Demo](docs/flow-free-layout-sj-demo.md)** - Feature notes for the SJ version of the free-layout editor
## 🚀 Getting Started
### For Newcomers
If this is your first encounter with UdminAI, we recommend reading in this order:
1. **[Project Overview](docs/PROJECT_OVERVIEW.md)** - Understand the overall architecture and core concepts
2. **[Backend Architecture](docs/BACKEND_ARCHITECTURE.md)** - Understand the backend design
3. **[Frontend Architecture](docs/FRONTEND_ARCHITECTURE.md)** - Learn how the frontend is organized
4. **[Flow Engine](docs/FLOW_ENGINE.md)** - Dive into the core flow orchestration capability
### For Developers
Team members working on the code should focus on:
- **[Service Layer](docs/SERVICES.md)** - Conventions for implementing business logic
- **[Route Layer](docs/ROUTES.md)** - Standards for API interface design
- **[Data Models](docs/MODELS.md)** - Rules for defining data structures
- **[Error Handling](docs/ERROR_HANDLING.md)** - The unified error-handling approach
### For Operations
For deployment and system operations, see:
- **[Database](docs/DATABASE.md)** - Database configuration and tuning
- **[Redis Integration](docs/REDIS_INTEGRATION.md)** - Cache system deployment and management
- **[Middleware](docs/MIDDLEWARES.md)** - Middleware configuration and monitoring
## 📖 Documentation Conventions
### Structure
Every module document follows the same structure:
- **Overview** - Basic introduction and design goals
- **Design Principles** - Core design ideas and constraints
- **Core Features** - Main features and how they are implemented
- **Usage Examples** - Concrete code samples and usage
- **Best Practices** - Recommended patterns and caveats
- **Summary** - Key characteristics and value of the module
### Code Samples
Code samples in the documentation are verified and can be used directly in the project. They follow the project's coding standards:
- **Rust** - Rust 2021 edition and the project's coding conventions
- **TypeScript** - TypeScript strict mode and the ESLint rules
- **Config files** - YAML/TOML format, kept simple and clear
### Update Policy
Documentation is updated in lockstep with the code to stay current and accurate:
- **Version control** - Docs share the code's version management
- **Continuous integration** - Doc consistency is checked automatically when code changes
- **Regular review** - Docs are reviewed and updated periodically
## 🤝 Contributing
### Contributing to the Docs
Contributions to the project documentation are welcome:
1. **Report problems** - If you find errors or inaccuracies in the docs, please open an Issue
2. **Suggest improvements** - Ideas for better structure or content are welcome for discussion
3. **Add content** - Fill in missing documents or add new usage examples
4. **Translate** - Help translate the docs into other languages
### Documentation Standards
When contributing, please follow these conventions:
- **Markdown format** - Use standard Markdown syntax
- **Write in Chinese** - Use clear, concise Chinese
- **Code highlighting** - Specify the correct language for code blocks
- **Link checks** - Make sure all links are valid
## 📞 Getting Help
If you run into problems, you can get help through the following channels:
- **Read the docs** - Start with the relevant module document
- **Search Issues** - Look for similar problems among the project Issues
- **Open an Issue** - If the problem remains unresolved, file a new Issue
- **Community discussion** - Join the project's community discussions
## 📄 License
This documentation is under the same license as the project. See the LICENSE file in the project root for details.
---
**The UdminAI Team**
*Building an intelligent flow management platform*
> 💡 **Tip**: Bookmark this document for quick reference. It is continuously updated, so check for the latest version.

View File

@ -7,7 +7,7 @@ JWT_SECRET=dev_secret_change_me
 JWT_ISS=udmin
 JWT_ACCESS_EXP_SECS=1800
 JWT_REFRESH_EXP_SECS=1209600
-CORS_ALLOW_ORIGINS=http://localhost:5173,http://localhost:5174,http://localhost:5175
+CORS_ALLOW_ORIGINS=http://localhost:5173,http://localhost:5174,http://localhost:5175,http://127.0.0.1:5173,http://127.0.0.1:5174,http://127.0.0.1:5175,http://localhost:8888,http://127.0.0.1:8888
 # Redis config
 REDIS_URL=redis://:123456@127.0.0.1:6379/9

1081
backend/Cargo.lock generated

File diff suppressed because it is too large

View File

@ -1,44 +1,51 @@
 [package]
 name = "udmin"
 version = "0.1.0"
-edition = "2024" # ✅ upgraded to the latest Rust edition
+edition = "2024"
 default-run = "udmin"

 [dependencies]
-axum = "0.8.4"
+axum = { version = "0.8.4", features = ["ws"] }
 tokio = { version = "1.47.1", features = ["full"] }
-tower = "0.5.0"
+tower = "0.5.2"
 tower-http = { version = "0.6.6", features = ["cors", "trace"] }
 hyper = { version = "1" }
 bytes = "1"
-serde = { version = "1.0", features = ["derive"] }
+serde = { version = "1.0.225", features = ["derive"] }
 serde_json = "1.0"
 serde_with = "3.14.0"
-sea-orm = { version = "1.1.14", features = ["sqlx-mysql", "sqlx-sqlite", "runtime-tokio-rustls", "macros"] }
+sea-orm = { version = "1.1.14", features = ["sqlx-mysql", "sqlx-sqlite", "sqlx-postgres", "runtime-tokio-rustls", "macros"] }
 jsonwebtoken = "9.3.1"
 argon2 = "0.5.3" # or upgrade to 3.0.0 (note: the API may be incompatible)
 uuid = { version = "1.11.0", features = ["serde", "v4"] }
 chrono = { version = "0.4", features = ["serde"] }
 tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
-config = "0.14"
+config = "0.15.16"
-dotenvy = "0.15"
+dotenvy = "0.15.7"
-thiserror = "1.0"
+thiserror = "2.0.16"
 anyhow = "1.0"
 once_cell = "1.19.0"
 utoipa = { version = "5.4.0", features = ["axum_extras", "chrono", "uuid"] }
-utoipa-swagger-ui = { version = "6.0.0", features = ["axum"] }
+utoipa-swagger-ui = { version = "9.0.2", features = ["axum"] }
 sha2 = "0.10"
-rand = "0.8"
+rand = "0.9.2"
-async-trait = "0.1"
+async-trait = "0.1.89"
-redis = { version = "0.27", features = ["tokio-comp", "connection-manager"] }
+redis = { version = "0.32.5", features = ["tokio-comp", "connection-manager"] }
-# flow-management dependencies
-petgraph = "0.6"
-rhai = { version = "1.17", features = ["serde", "metadata", "internals"] }
+petgraph = "0.8.2"
+rhai = { version = "1.23.4", features = ["serde", "metadata", "internals"] }
 serde_yaml = "0.9"
-regex = "1.10"
+regex = "1.11.2"
-reqwest = { version = "0.11", features = ["json", "rustls-tls"], default-features = false }
+reqwest = { version = "0.12.23", features = ["json", "rustls-tls-native-roots"], default-features = false }
+futures = "0.3.31"
+percent-encoding = "2.3"
+tokio-cron-scheduler = "0.14.0"
+# New: QuickJS runtime for the JS executor (no extra features enabled)
+rquickjs = "0.9.0"
+# New: wraps an mpsc::Receiver as a Stream (SSE)
+tokio-stream = "0.1.17"
+# New: distributed ID generation (Snowflake)
+rs-snowflake = "0.6.0"

 [dependencies.migration]
 path = "migration"
@ -46,3 +53,6 @@ path = "migration"
 [profile.release]
 lto = true
 codegen-units = 1
+
+[dev-dependencies]
+wiremock = "0.6"

View File

@ -1,5 +0,0 @@
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_localhost FALSE / FALSE 1756996792 refresh_token eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsInVpZCI6MSwiaXNzIjoidWRtaW4iLCJleHAiOjE3NTY5OTY3OTIsInR5cCI6InJlZnJlc2gifQ.XllW1VXSni2F548WFm1tjG3gwWPou_QE1JRabz0RZTE

View File

@ -11,6 +11,24 @@ mod m20220101_000008_add_keep_alive_to_menus;
 mod m20220101_000009_create_request_logs;
 // Positions and the user-position association
 mod m20220101_000010_create_positions;
+// Placeholders: workflow migrations applied historically but now missing
+mod m20220101_000011_create_workflows;
+mod m20220101_000012_create_workflow_executions;
+mod m20220101_000013_create_workflow_execution_logs;
+mod m20220101_000014_create_flows;
+// Add the code and remark columns to flows
+mod m20220101_000015_add_code_and_remark_to_flows;
+mod m20220101_000016_dedup_flows_code;
+mod m20220101_000016_add_unique_index_to_flows_code;
+mod m20220101_000017_create_flow_run_logs;
+// Add the flow_code column to flow_run_logs
+mod m20220101_000018_add_flow_code_to_flow_run_logs;
+// Scheduled-jobs table
+mod m20220101_000019_create_schedule_jobs;
+// Composite indexes for flow_run_logs
+mod m20220101_000020_add_indexes_to_flow_run_logs;
+// Fix the flow_run_logs.flow_id type to BIGINT
+mod m20220101_000021_alter_flow_run_logs_flow_id_to_bigint;

 pub struct Migrator;
@ -29,6 +47,26 @@ impl MigratorTrait for Migrator {
 Box::new(m20220101_000009_create_request_logs::Migration),
 // register the positions migration
 Box::new(m20220101_000010_create_positions::Migration),
+// Placeholders: workflow migrations applied historically but now missing
+Box::new(m20220101_000011_create_workflows::Migration),
+Box::new(m20220101_000012_create_workflow_executions::Migration),
+Box::new(m20220101_000013_create_workflow_execution_logs::Migration),
+// flows table
+Box::new(m20220101_000014_create_flows::Migration),
+// Add the code and remark columns to flows
+Box::new(m20220101_000015_add_code_and_remark_to_flows::Migration),
+// Deduplicate before creating the unique index
+Box::new(m20220101_000016_dedup_flows_code::Migration),
+Box::new(m20220101_000016_add_unique_index_to_flows_code::Migration),
+Box::new(m20220101_000017_create_flow_run_logs::Migration),
+// Add the flow_code column to flow_run_logs
+Box::new(m20220101_000018_add_flow_code_to_flow_run_logs::Migration),
+// Scheduled-jobs table (registration restored)
+Box::new(m20220101_000019_create_schedule_jobs::Migration),
+// Fix the flow_run_logs.flow_id type to BIGINT
+Box::new(m20220101_000021_alter_flow_run_logs_flow_id_to_bigint::Migration),
+// Composite indexes for flow_run_logs
+Box::new(m20220101_000020_add_indexes_to_flow_run_logs::Migration),
 ]
 }
 }

View File

@ -24,9 +24,10 @@ impl MigrationTrait for Migration {
 )
 .await?;
-// seed admin user (cross-DB)
+// seed admin user for all DB backends
+// NOTE: test default password is fixed to '123456' for local/dev testing
 let salt = SaltString::generate(&mut rand::thread_rng());
-let hash = Argon2::default().hash_password("Admin@123".as_bytes(), &salt).unwrap().to_string();
+let hash = Argon2::default().hash_password("123456".as_bytes(), &salt).unwrap().to_string();
 let backend = manager.get_database_backend();
 let conn = manager.get_connection();
 match backend {

View File

@ -0,0 +1,16 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
// Placeholder migration: applied historically, but the actual file is missing from the current codebase
Ok(())
}
async fn down(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
Ok(())
}
}

View File

@ -0,0 +1,16 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
// Placeholder migration: applied historically, but the actual file is missing from the current codebase
Ok(())
}
async fn down(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
Ok(())
}
}

View File

@ -0,0 +1,16 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
// Placeholder migration: applied historically, but the actual file is missing from the current codebase
Ok(())
}
async fn down(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
Ok(())
}
}

View File

@ -0,0 +1,40 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(Flows::Table)
.if_not_exists()
.col(ColumnDef::new(Flows::Id).string_len(64).not_null().primary_key())
.col(ColumnDef::new(Flows::Name).string().null())
.col(ColumnDef::new(Flows::Yaml).text().null())
.col(ColumnDef::new(Flows::DesignJson).text().null())
.col(ColumnDef::new(Flows::CreatedAt).timestamp().not_null().default(Expr::current_timestamp()))
.col(ColumnDef::new(Flows::UpdatedAt).timestamp().not_null().default(Expr::current_timestamp()))
.to_owned()
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager.drop_table(Table::drop().table(Flows::Table).to_owned()).await
}
}
#[derive(Iden)]
enum Flows {
Table,
Id,
Name,
Yaml,
DesignJson,
CreatedAt,
UpdatedAt,
}

View File

@ -0,0 +1,56 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// SQLite does not support multiple operations in a single ALTER statement, so split this into two executions
manager
.alter_table(
Table::alter()
.table(Flows::Table)
.add_column(ColumnDef::new(Flows::Code).string().null())
.to_owned()
)
.await?;
manager
.alter_table(
Table::alter()
.table(Flows::Table)
.add_column(ColumnDef::new(Flows::Remark).string().null())
.to_owned()
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// Reverse order of up; still needs to be split into two executions
manager
.alter_table(
Table::alter()
.table(Flows::Table)
.drop_column(Flows::Remark)
.to_owned()
)
.await?;
manager
.alter_table(
Table::alter()
.table(Flows::Table)
.drop_column(Flows::Code)
.to_owned()
)
.await
}
}
#[derive(DeriveIden)]
enum Flows {
Table,
Code,
Remark,
}

View File

@ -0,0 +1,37 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_index(
Index::create()
.name("idx-unique-flows-code")
.table(Flows::Table)
.col(Flows::Code)
.unique()
.to_owned(),
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.drop_index(
Index::drop()
.name("idx-unique-flows-code")
.table(Flows::Table)
.to_owned(),
)
.await
}
}
#[derive(DeriveIden)]
enum Flows {
Table,
Code,
}

View File

@ -0,0 +1,65 @@
use sea_orm_migration::prelude::*;
use sea_orm_migration::sea_orm::Statement;
use sea_orm_migration::sea_orm::DatabaseBackend;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
let db = manager.get_connection();
match manager.get_database_backend() {
DatabaseBackend::MySql => {
// Set duplicate code values to NULL, keeping only the row with the smallest id in each group
let sql = r#"
UPDATE flows f
JOIN (
SELECT code, MIN(id) AS min_id
FROM flows
WHERE code IS NOT NULL
GROUP BY code
HAVING COUNT(*) > 1
) t ON f.code = t.code AND f.id <> t.min_id
SET f.code = NULL;
"#;
db.execute(Statement::from_string(DatabaseBackend::MySql, sql.to_string())).await?;
Ok(())
}
DatabaseBackend::Postgres => {
let sql = r#"
WITH d AS (
SELECT id, ROW_NUMBER() OVER(PARTITION BY code ORDER BY id) AS rn
FROM flows
WHERE code IS NOT NULL
)
UPDATE flows AS f
SET code = NULL
FROM d
WHERE f.id = d.id AND d.rn > 1;
"#;
db.execute(Statement::from_string(DatabaseBackend::Postgres, sql.to_string())).await?;
Ok(())
}
DatabaseBackend::Sqlite => {
let sql = r#"
WITH d AS (
SELECT id, ROW_NUMBER() OVER(PARTITION BY code ORDER BY id) AS rn
FROM flows
WHERE code IS NOT NULL
)
UPDATE flows
SET code = NULL
WHERE id IN (SELECT id FROM d WHERE rn > 1);
"#;
db.execute(Statement::from_string(DatabaseBackend::Sqlite, sql.to_string())).await?;
Ok(())
}
}
}
async fn down(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
// The data cleanup is irreversible
Ok(())
}
}
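The three SQL variants above implement the same rule: within each group of rows sharing a code, keep the smallest id and clear code on the rest. The same selection logic in plain Rust, purely for illustration (the function and tuple shapes are invented):

```rust
use std::collections::HashMap;

// Illustration of the dedup rule from the migration: for each group of rows
// sharing a non-NULL code, keep the row with the smallest id and return the
// ids whose code should be cleared.
fn ids_to_clear(rows: &[(i64, Option<&str>)]) -> Vec<i64> {
    let mut min_id: HashMap<&str, i64> = HashMap::new();
    for &(id, code) in rows {
        if let Some(c) = code {
            min_id.entry(c).and_modify(|m| *m = (*m).min(id)).or_insert(id);
        }
    }
    let mut out: Vec<i64> = rows
        .iter()
        .filter_map(|&(id, code)| code.and_then(|c| (min_id[c] != id).then_some(id)))
        .collect();
    out.sort();
    out
}

fn main() {
    // rows are (id, code); ids 2 and 5 duplicate code "a" held by id 1
    let rows = [(1, Some("a")), (2, Some("a")), (3, Some("b")), (4, None), (5, Some("a"))];
    assert_eq!(ids_to_clear(&rows), vec![2, 5]);
    println!("dedup ok");
}
```

Clearing the duplicates first is what allows the next migration's unique index on `flows.code` to be created without failing.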

View File

@ -0,0 +1,50 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(FlowRunLogs::Table)
.if_not_exists()
.col(ColumnDef::new(FlowRunLogs::Id).big_integer().not_null().auto_increment().primary_key())
.col(ColumnDef::new(FlowRunLogs::FlowId).string_len(64).not_null())
.col(ColumnDef::new(FlowRunLogs::Input).text().null())
.col(ColumnDef::new(FlowRunLogs::Output).text().null())
.col(ColumnDef::new(FlowRunLogs::Ok).boolean().not_null().default(false))
.col(ColumnDef::new(FlowRunLogs::Logs).text().null())
.col(ColumnDef::new(FlowRunLogs::UserId).big_integer().null())
.col(ColumnDef::new(FlowRunLogs::Username).string().null())
.col(ColumnDef::new(FlowRunLogs::StartedAt).timestamp().not_null().default(Expr::current_timestamp()))
.col(ColumnDef::new(FlowRunLogs::DurationMs).big_integer().not_null().default(0))
.col(ColumnDef::new(FlowRunLogs::CreatedAt).timestamp().not_null().default(Expr::current_timestamp()))
.to_owned()
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager.drop_table(Table::drop().table(FlowRunLogs::Table).to_owned()).await
}
}
#[derive(Iden)]
enum FlowRunLogs {
Table,
Id,
FlowId,
Input,
Output,
Ok,
Logs,
UserId,
Username,
StartedAt,
DurationMs,
CreatedAt,
}

View File

@ -0,0 +1,37 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(FlowRunLogs::Table)
.add_column(ColumnDef::new(FlowRunLogs::FlowCode).string().null())
.to_owned()
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(FlowRunLogs::Table)
.drop_column(FlowRunLogs::FlowCode)
.to_owned()
)
.await
}
}
#[derive(DeriveIden)]
enum FlowRunLogs {
Table,
#[sea_orm(iden = "flow_run_logs")]
__N, // dummy to ensure table name when not default; but using Table alias is standard
FlowCode,
}

View File

@ -0,0 +1,42 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(ScheduleJobs::Table)
.if_not_exists()
.col(ColumnDef::new(ScheduleJobs::Id).string_len(64).not_null().primary_key())
.col(ColumnDef::new(ScheduleJobs::Name).string().not_null())
.col(ColumnDef::new(ScheduleJobs::CronExpr).string().not_null())
.col(ColumnDef::new(ScheduleJobs::Enabled).boolean().not_null().default(false))
.col(ColumnDef::new(ScheduleJobs::FlowCode).string().not_null())
.col(ColumnDef::new(ScheduleJobs::CreatedAt).timestamp().not_null().default(Expr::current_timestamp()))
.col(ColumnDef::new(ScheduleJobs::UpdatedAt).timestamp().not_null().default(Expr::current_timestamp()))
.to_owned()
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager.drop_table(Table::drop().table(ScheduleJobs::Table).to_owned()).await
}
}
#[derive(Iden)]
enum ScheduleJobs {
Table,
Id,
Name,
CronExpr,
Enabled,
FlowCode,
CreatedAt,
UpdatedAt,
}

View File

@ -0,0 +1,93 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// Indexes covering the common sort and filter patterns
// 1) No filter: ORDER BY started_at DESC, id DESC uses (started_at, id)
manager
.create_index(
Index::create()
.if_not_exists()
.name("idx_flow_run_logs_started_at_id")
.table(FlowRunLogs::Table)
.col(FlowRunLogs::StartedAt)
.col(FlowRunLogs::Id)
.to_owned(),
)
.await?;
// 2) flow_id = ? with ORDER BY started_at DESC, id DESC uses (flow_id, started_at, id)
manager
.create_index(
Index::create()
.if_not_exists()
.name("idx_flow_run_logs_flow_id_started_at_id")
.table(FlowRunLogs::Table)
.col(FlowRunLogs::FlowId)
.col(FlowRunLogs::StartedAt)
.col(FlowRunLogs::Id)
.to_owned(),
)
.await?;
// 3) flow_code = ? with ORDER BY started_at DESC, id DESC uses (flow_code, started_at, id)
manager
.create_index(
Index::create()
.if_not_exists()
.name("idx_flow_run_logs_flow_code_started_at_id")
.table(FlowRunLogs::Table)
.col(FlowRunLogs::FlowCode)
.col(FlowRunLogs::StartedAt)
.col(FlowRunLogs::Id)
.to_owned(),
)
.await?;
// 4) ok = ? with ORDER BY started_at DESC, id DESC uses (ok, started_at, id)
manager
.create_index(
Index::create()
.if_not_exists()
.name("idx_flow_run_logs_ok_started_at_id")
.table(FlowRunLogs::Table)
.col(FlowRunLogs::Ok)
.col(FlowRunLogs::StartedAt)
.col(FlowRunLogs::Id)
.to_owned(),
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.drop_index(Index::drop().name("idx_flow_run_logs_started_at_id").table(FlowRunLogs::Table).to_owned())
.await?;
manager
.drop_index(Index::drop().name("idx_flow_run_logs_flow_id_started_at_id").table(FlowRunLogs::Table).to_owned())
.await?;
manager
.drop_index(Index::drop().name("idx_flow_run_logs_flow_code_started_at_id").table(FlowRunLogs::Table).to_owned())
.await?;
manager
.drop_index(Index::drop().name("idx_flow_run_logs_ok_started_at_id").table(FlowRunLogs::Table).to_owned())
.await
}
}
#[derive(DeriveIden)]
enum FlowRunLogs {
#[sea_orm(iden = "flow_run_logs")]
Table,
Id,
FlowId,
FlowCode,
StartedAt,
Ok,
}
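The composite indexes above exist to serve keyset (cursor) pagination over `flow_run_logs`. A dependency-free sketch of the SQL shape such a query takes — the function name and cursor layout are illustrative, and the row-value comparison syntax assumes MySQL/PostgreSQL:

```rust
// Sketch: building a keyset-pagination query that the
// (flow_id, started_at, id) composite index above can serve.
// The table name matches the migration; everything else is illustrative.
fn keyset_page_sql(flow_id: i64, cursor: Option<(&str, i64)>, limit: u32) -> String {
    let mut sql = format!("SELECT * FROM flow_run_logs WHERE flow_id = {flow_id}");
    // Resume strictly after the last row of the previous page using a
    // row-value comparison, so the index is walked without an OFFSET scan.
    if let Some((started_at, id)) = cursor {
        sql.push_str(&format!(" AND (started_at, id) < ('{started_at}', {id})"));
    }
    sql.push_str(&format!(" ORDER BY started_at DESC, id DESC LIMIT {limit}"));
    sql
}

fn main() {
    println!("{}", keyset_page_sql(42, Some(("2025-09-25 22:00:00", 900)), 20));
}
```

In a real service the cursor values would come from the last row of the previous page and be bound as parameters, not interpolated.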

View File

@ -0,0 +1,40 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// Change flow_id from VARCHAR(64) to BIGINT to match the entity model and query parameter types
manager
.alter_table(
Table::alter()
.table(FlowRunLogs::Table)
.modify_column(ColumnDef::new(FlowRunLogs::FlowId).big_integer().not_null())
.to_owned(),
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// Rollback: change flow_id back to VARCHAR(64)
manager
.alter_table(
Table::alter()
.table(FlowRunLogs::Table)
.modify_column(ColumnDef::new(FlowRunLogs::FlowId).string_len(64).not_null())
.to_owned(),
)
.await
}
}
#[derive(DeriveIden)]
enum FlowRunLogs {
#[sea_orm(iden = "flow_run_logs")]
Table,
FlowId,
}
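For reference, the raw DDL this migration roughly corresponds to, assuming a MySQL backend (PostgreSQL would use `ALTER COLUMN ... TYPE` instead):

```sql
-- up: widen flow_id to a 64-bit integer
ALTER TABLE flow_run_logs MODIFY COLUMN flow_id BIGINT NOT NULL;

-- down: revert to the original string type
ALTER TABLE flow_run_logs MODIFY COLUMN flow_id VARCHAR(64) NOT NULL;
```

Note that the `up` step will fail on rows whose existing `flow_id` strings are not parseable as integers, so it assumes the column already holds numeric IDs.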

View File

@ -1,5 +1,7 @@
use sea_orm::{ConnectOptions, Database, DatabaseConnection};
use std::time::Duration;
use once_cell::sync::OnceCell;
use crate::error::AppError;
pub type Db = DatabaseConnection;
@ -13,3 +15,19 @@ pub async fn init_db() -> anyhow::Result<Db> {
let conn = Database::connect(opt).await?;
Ok(conn)
}
// ===== Global DB connection (OnceCell) =====
static GLOBAL_DB: OnceCell<Db> = OnceCell::new();
pub fn set_db(conn: Db) -> Result<(), AppError> {
GLOBAL_DB
.set(conn)
.map_err(|_| AppError::Anyhow(anyhow::anyhow!("Db already initialized")))
}
#[allow(dead_code)]
pub fn get_db() -> Result<&'static Db, AppError> {
GLOBAL_DB
.get()
.ok_or_else(|| AppError::Anyhow(anyhow::anyhow!("Db not initialized")))
}
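The `once_cell` pattern above maps directly onto std's `OnceLock`; a minimal self-contained sketch, with a `String` standing in for `DatabaseConnection` and plain strings for `AppError`:

```rust
use std::sync::OnceLock;

// Stand-in for the global connection; a String replaces sea_orm's
// DatabaseConnection so the sketch stays dependency-free.
static GLOBAL_DB: OnceLock<String> = OnceLock::new();

// Mirrors set_db: the first call wins, any later call reports the conflict.
fn set_db(conn: String) -> Result<(), String> {
    GLOBAL_DB.set(conn).map_err(|_| "Db already initialized".to_string())
}

// Mirrors get_db: readable from anywhere once initialized.
fn get_db() -> Result<&'static String, String> {
    GLOBAL_DB.get().ok_or_else(|| "Db not initialized".to_string())
}

fn main() {
    assert!(get_db().is_err()); // nothing set yet
    assert!(set_db("conn-1".into()).is_ok());
    assert!(set_db("conn-2".into()).is_err()); // second init rejected
    assert_eq!(get_db().unwrap(), "conn-1");
}
```

The upside of this pattern over passing the connection through every call chain is that background tasks (e.g. the scheduler) can reach the pool; the cost is that tests must arrange initialization exactly once.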

View File

@ -8,7 +8,10 @@ pub enum AppError {
#[error("forbidden")] Forbidden,
#[error("forbidden: {0}")] ForbiddenMsg(String),
#[error("bad request: {0}")] BadRequest(String),
#[error("conflict: {0}")] Conflict(String),
#[error("not found")] NotFound,
// New: lets selected endpoints (e.g. synchronous flow runs) surface the backend error message verbatim
#[error("internal error: {0}")] InternalMsg(String),
#[error(transparent)] Db(#[from] sea_orm::DbErr),
#[error(transparent)] Jwt(#[from] jsonwebtoken::errors::Error),
#[error(transparent)] Anyhow(#[from] anyhow::Error),
@ -22,7 +25,12 @@ impl IntoResponse for AppError {
AppError::Forbidden => (StatusCode::FORBIDDEN, 403, "forbidden".to_string()),
AppError::ForbiddenMsg(m) => (StatusCode::FORBIDDEN, 403, m.clone()),
AppError::BadRequest(m) => (StatusCode::BAD_REQUEST, 400, m.clone()),
AppError::Conflict(m) => (StatusCode::CONFLICT, 409, m.clone()),
AppError::NotFound => (StatusCode::NOT_FOUND, 404, "not found".into()),
// Treat JWT decode/validation errors as 401 to allow the frontend to refresh tokens
AppError::Jwt(_) => (StatusCode::UNAUTHORIZED, 401, "unauthorized".to_string()),
// New: return InternalMsg as 500 with the detailed message
AppError::InternalMsg(m) => (StatusCode::INTERNAL_SERVER_ERROR, 500, m.clone()),
_ => (StatusCode::INTERNAL_SERVER_ERROR, 500, "internal error".into()),
};
(status, Json(ApiResponse::<serde_json::Value> { code, message: msg, data: None })).into_response()

View File

@ -0,0 +1,42 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct FlowContext {
#[serde(default)]
pub data: serde_json::Value,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub enum ExecutionMode {
#[serde(rename = "sync")] Sync,
#[serde(rename = "async")] AsyncFireAndForget,
}
impl Default for ExecutionMode { fn default() -> Self { ExecutionMode::Sync } }
// New: streaming events (for SSE)
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")] // tagged with a discriminant field so the frontend can tell event types apart
pub enum StreamEvent {
#[serde(rename = "node")]
Node { node_id: String, logs: Vec<String>, ctx: serde_json::Value },
#[serde(rename = "done")]
Done { ok: bool, ctx: serde_json::Value, logs: Vec<String> },
#[serde(rename = "error")]
Error { message: String },
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DriveOptions {
#[serde(default)]
pub max_steps: usize,
#[serde(default)]
pub execution_mode: ExecutionMode,
// New: event channel (runtime only; never serialized/deserialized)
#[serde(default, skip_serializing, skip_deserializing)]
pub event_tx: Option<tokio::sync::mpsc::Sender<StreamEvent>>,
}
impl Default for DriveOptions {
fn default() -> Self { Self { max_steps: 10_000, execution_mode: ExecutionMode::Sync, event_tx: None } }
}
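With `#[serde(tag = "type")]` and the renames above, the three `StreamEvent` variants serialize to internally tagged JSON objects along these lines (field values are illustrative):

```json
{"type": "node", "node_id": "n2", "logs": ["enter node: n2"], "ctx": {"data": {}}}
{"type": "done", "ok": true, "ctx": {"data": {}}, "logs": ["enter node: n1", "exec task: http (sync)"]}
{"type": "error", "message": "task error: http: timeout"}
```

A frontend SSE consumer can switch on the `type` field without inspecting the rest of the payload.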

View File

@ -0,0 +1,44 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash, Default)]
pub struct NodeId(pub String);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum NodeKind {
Start,
End,
Task,
Condition,
}
impl Default for NodeKind {
fn default() -> Self { Self::Task }
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct NodeDef {
pub id: NodeId,
#[serde(default)]
pub kind: NodeKind,
#[serde(default)]
pub name: String,
#[serde(default)]
pub task: Option<String>, // identifier of the bound task component
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct LinkDef {
pub from: NodeId,
pub to: NodeId,
#[serde(default)]
pub condition: Option<String>, // condition script; evaluates to bool
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ChainDef {
#[serde(default)]
pub name: String,
pub nodes: Vec<NodeDef>,
#[serde(default)]
pub links: Vec<LinkDef>,
}

365
backend/src/flow/dsl.rs Normal file
View File

@ -0,0 +1,365 @@
//! Module: parsing, validation, and building of the flow DSL and the free-layout design JSON.
//! Contents:
//! - FlowDSL/NodeDSL/EdgeDSL: a fairly declarative, simplified DSL (used for external APIs / persistence).
//! - DesignSyntax/NodeSyntax/EdgeSyntax: structures aligned with the frontend free-layout JSON (including source_port_id, etc.).
//! - validate_design: basic checks (unique node IDs, at least one start and one end, edges reference known nodes).
//! - build_chain_from_design: converts free-layout JSON into the internal ChainDef (with heuristics such as AND-group assembly for condition nodes and compatibility logic).
//! - chain_from_design_json: single entry point; accepts both string and object input, backfills compatibility fields, then validates and builds.
//! Note: backward compatibility is preserved where possible; outgoing edges of condition nodes use a heuristic (e.g. single outgoing edge + multiple conditions => assemble an AND group).
use serde::{Deserialize, Serialize};
use serde_json::Value;
use anyhow::bail;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowDSL {
#[serde(default)]
/// Flow name (optional)
pub name: String,
#[serde(default, alias = "executionMode")]
/// Execution mode (accepts the frontend's executionMode: sync/async; currently a placeholder)
pub execution_mode: Option<String>,
/// Node list (in declaration order)
pub nodes: Vec<NodeDSL>,
#[serde(default)]
/// Edge list (from -> to, optional condition)
pub edges: Vec<EdgeDSL>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodeDSL {
/// Unique node ID (string)
pub id: String,
#[serde(default)]
/// Node type: start / end / task / condition
pub kind: String,
#[serde(default)]
/// Node display name (optional)
pub name: String,
#[serde(default)]
/// Task identifier (binds an executor), e.g. http/db/variable/script_* (optional)
pub task: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EdgeDSL {
/// Source node ID (aliases: source/from)
#[serde(alias = "source", alias = "from", rename = "from")]
pub from: String,
/// Target node ID (aliases: target/to)
#[serde(alias = "target", alias = "to", rename = "to")]
pub to: String,
#[serde(default)]
/// Condition expression (string):
/// - if it is a JSON string (starting with { or [), it is evaluated as a JSON condition set;
/// - otherwise it is evaluated as a Rhai expression;
/// - an empty string/None means unconditional.
pub condition: Option<String>,
}
impl From<FlowDSL> for super::domain::ChainDef {
/// Convert the simplified DSL into the internal ChainDef:
/// - kind mapping: start/end/condition/anything else -> task; the decision alias maps to condition.
/// - edges' from/to/condition are carried over verbatim.
fn from(v: FlowDSL) -> Self {
super::domain::ChainDef {
name: v.name,
nodes: v
.nodes
.into_iter()
.map(|n| super::domain::NodeDef {
id: super::domain::NodeId(n.id),
kind: match n.kind.to_lowercase().as_str() {
"start" => super::domain::NodeKind::Start,
"end" => super::domain::NodeKind::End,
"decision" | "condition" => super::domain::NodeKind::Condition,
_ => super::domain::NodeKind::Task,
},
name: n.name,
task: n.task,
})
.collect(),
links: v
.edges
.into_iter()
.map(|e| super::domain::LinkDef {
from: super::domain::NodeId(e.from),
to: super::domain::NodeId(e.to),
condition: e.condition,
})
.collect(),
}
}
}
// ===== Parse design_json (frontend free-layout JSON) into a ChainDef =====
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DesignSyntax {
#[serde(default)]
/// Design name (optional)
pub name: String,
#[serde(default)]
/// Node set (free layout)
pub nodes: Vec<NodeSyntax>,
#[serde(default)]
/// Edge set (free layout)
pub edges: Vec<EdgeSyntax>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodeSyntax {
/// Node ID
pub id: String,
#[serde(rename = "type", default)]
/// Frontend type: start | end | condition | http | db | task | script_* (used to infer the concrete executor)
pub kind: String,
#[serde(default)]
/// Extra node data (title/conditions/scripts, etc.)
pub data: serde_json::Value,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EdgeSyntax {
/// Source (accepts sourceNodeID/source/from)
#[serde(alias = "sourceNodeID", alias = "source", alias = "from")]
pub from: String,
/// Target (accepts targetNodeID/target/to)
#[serde(alias = "targetNodeID", alias = "target", alias = "to")]
pub to: String,
#[serde(default)]
/// Source port ID, used for the legacy mapping from a condition node's port to a condition key;
/// the special values and/all/group/true assemble all of the node's condition values into an AND group.
pub source_port_id: Option<String>,
}
/// Design-level validation:
/// - node IDs are non-empty and unique;
/// - at least one start and one end;
/// - every edge's from/to must reference a known node.
fn validate_design(design: &DesignSyntax) -> anyhow::Result<()> {
use std::collections::HashSet;
let mut ids = HashSet::new();
for n in &design.nodes {
// A node ID must be non-empty and unique within a flow
if n.id.trim().is_empty() { bail!("node id is required"); }
if !ids.insert(n.id.clone()) { bail!("duplicate node id: {}", n.id); }
}
// Ensure the design contains at least one start node and one end node
let mut start = 0usize;
let mut end = 0usize;
for n in &design.nodes {
match n.kind.as_str() {
"start" => start += 1,
"end" => end += 1,
_ => {}
}
}
anyhow::ensure!(start >= 1, "flow must have at least one start node");
anyhow::ensure!(end >= 1, "flow must have at least one end node");
// Validate edge references (from/to must be present and point at known nodes)
for e in &design.edges {
if e.from.is_empty() || e.to.is_empty() { bail!("edge must have both from and to"); }
if !ids.contains(&e.from) { bail!("edge from references unknown node: {}", e.from); }
if !ids.contains(&e.to) { bail!("edge to references unknown node: {}", e.to); }
}
Ok(())
}
/// Convert a free-layout DesignSyntax into the internal ChainDef:
/// - nodes: infer kind/name/task (including scripts and inline script/expr compatibility);
/// - edges:
///   * condition nodes: support the legacy source_port_id -> data.conditions mapping;
///   * when source_port_id is empty or one of and/all/group/true, collect the condition values into an AND group;
///   * heuristic: if a condition node has exactly one outgoing edge and multiple conditions, fall back to an AND group even when source_port_id names a specific key;
///   * non-condition nodes: conditions are not processed.
fn build_chain_from_design(design: &DesignSyntax) -> anyhow::Result<super::domain::ChainDef> {
use super::domain::{ChainDef, NodeDef, NodeId, NodeKind, LinkDef};
let mut nodes: Vec<NodeDef> = Vec::new();
for n in &design.nodes {
let kind = match n.kind.as_str() {
"start" => NodeKind::Start,
"end" => NodeKind::End,
"condition" => NodeKind::Condition,
_ => NodeKind::Task,
};
// Read the name from the node's data.title; empty string when absent
let name = n.data.get("title").and_then(|v| v.as_str()).unwrap_or("").to_string();
// Map executable node types to task identifiers (used to bind task implementations)
let mut task = match n.kind.as_str() {
"http" => Some("http".to_string()),
"db" => Some("db".to_string()),
"variable" => Some("variable".to_string()),
// Script nodes: split by language
"script" | "expr" | "script_rhai" => Some("script_rhai".to_string()),
"script_js" | "javascript" | "js" => Some("script_js".to_string()),
"script_python" | "python" | "py" => Some("script_python".to_string()),
_ => None,
};
// Compatibility/inference: derive the script type from data.scripts.* or an inline script/expr
if task.is_none() {
if let Some(obj) = n.data.get("scripts").and_then(|v| v.as_object()) {
if obj.get("js").is_some() { task = Some("script_js".to_string()); }
else if obj.get("python").is_some() { task = Some("script_python".to_string()); }
else if obj.get("rhai").is_some() { task = Some("script_rhai".to_string()); }
}
}
if task.is_none() {
if n.data.get("script").is_some() || n.data.get("expr").is_some() {
task = Some("script_rhai".to_string());
}
}
nodes.push(NodeDef { id: NodeId(n.id.clone()), kind, name, task });
}
// Pre-count each from-node's outgoing edges for the heuristic: a condition node with a single outgoing edge and multiple conditions defaults to an AND group
use std::collections::HashMap;
let mut out_deg: HashMap<&str, usize> = HashMap::new();
for e in &design.edges { *out_deg.entry(e.from.as_str()).or_insert(0) += 1; }
// Compatibility with the legacy sourcePortID-based condition encoding:
// if an edge carries a source_port_id, look up the condition value with the same key in the source (condition) node's data.conditions and use it as the edge's condition;
// new: when source_port_id is empty, or is and/all/group/true, collect all condition values of that condition node into an array and attach it to the edge with AND semantics;
// further heuristic: when the source is a condition node with exactly one outgoing edge and multiple conditions, assemble an AND group even if source_port_id names a specific key.
let mut links: Vec<LinkDef> = Vec::new();
for e in &design.edges {
let mut cond: Option<String> = None;
if let Some(src) = design.nodes.iter().find(|x| x.id == e.from) {
if src.kind.as_str() == "condition" {
let conds = src.data.get("conditions").and_then(|v| v.as_array());
let conds_len = conds.map(|a| a.len()).unwrap_or(0);
let only_one_out = out_deg.get(src.id.as_str()).copied().unwrap_or(0) == 1;
match &e.source_port_id {
Some(spid) => {
let spid_l = spid.to_lowercase();
let mut want_group = spid_l == "and" || spid_l == "all" || spid_l == "group" || spid_l == "true";
if !want_group && only_one_out && conds_len > 1 {
// Heuristic fallback: single outgoing edge + multiple conditions => assemble an AND group
want_group = true;
}
if want_group {
if let Some(arr) = conds {
let mut values: Vec<Value> = Vec::new();
for item in arr { if let Some(v) = item.get("value").cloned() { values.push(v); } }
if !values.is_empty() { if let Ok(s) = serde_json::to_string(&Value::Array(values)) { cond = Some(s); } }
}
} else {
if let Some(arr) = conds {
if let Some(item) = arr.iter().find(|c| c.get("key").and_then(|v| v.as_str()) == Some(spid.as_str())) {
if let Some(val) = item.get("value") { if let Ok(s) = serde_json::to_string(val) { cond = Some(s); } }
}
}
}
}
None => {
// No specific port given: assemble all of the node's conditions into an AND group
if let Some(arr) = conds {
let mut values: Vec<Value> = Vec::new();
for item in arr { if let Some(v) = item.get("value").cloned() { values.push(v); } }
if !values.is_empty() { if let Ok(s) = serde_json::to_string(&Value::Array(values)) { cond = Some(s); } }
}
}
}
}
}
links.push(LinkDef { from: NodeId(e.from.clone()), to: NodeId(e.to.clone()), condition: cond });
}
Ok(ChainDef { name: design.name.clone(), nodes, links })
}
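The AND-group decision embedded in the loop above reduces to a small predicate over the port ID, the source node's out-degree, and its condition count. A dependency-free sketch (the function name is ours; the logic mirrors build_chain_from_design):

```rust
// Sketch of the AND-group heuristic used when attaching conditions to an
// edge leaving a condition node.
fn wants_and_group(source_port_id: Option<&str>, out_degree: usize, conditions: usize) -> bool {
    match source_port_id {
        // No port at all: group every condition with AND semantics.
        None => true,
        Some(spid) => {
            let spid_l = spid.to_lowercase();
            // Explicit group markers...
            matches!(spid_l.as_str(), "and" | "all" | "group" | "true")
                // ...or the heuristic fallback: one outgoing edge, many conditions.
                || (out_degree == 1 && conditions > 1)
        }
    }
}

fn main() {
    assert!(wants_and_group(None, 2, 3));
    assert!(wants_and_group(Some("AND"), 2, 1));
    assert!(wants_and_group(Some("if_0"), 1, 2));  // heuristic fallback
    assert!(!wants_and_group(Some("if_0"), 2, 2)); // specific key wins
}
```

Keeping the predicate separate like this would also make the heuristic unit-testable independently of JSON parsing.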
// Rewire external API to typed syntax -> validate -> build
pub fn chain_from_design_json(design: &Value) -> anyhow::Result<super::domain::ChainDef> {
// Accept both JSON object and stringified JSON
let parsed: Option<Value> = match design {
Value::String(s) => serde_json::from_str::<Value>(s).ok(),
_ => None,
};
let design = parsed.as_ref().unwrap_or(design);
let mut syntax: DesignSyntax = serde_json::from_value(design.clone())?;
// fill source_port_id for backward compat if edges carry sourcePortID
if let Some(arr) = design.get("edges").and_then(|v| v.as_array()) {
for (i, e) in arr.iter().enumerate() {
if let Some(spid) = e.get("sourcePortID").and_then(|v| v.as_str()) {
if i < syntax.edges.len() {
syntax.edges[i].source_port_id = Some(spid.to_string());
}
}
}
}
validate_design(&syntax)?;
build_chain_from_design(&syntax)
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
#[test]
fn build_chain_ok_with_start_end_and_tasks() {
let design = json!({
"name": "demo",
"nodes": [
{"id": "n1", "type": "start", "data": {"title": "Start"}},
{"id": "n2", "type": "http", "data": {"title": "HTTP"}},
{"id": "n3", "type": "end", "data": {"title": "End"}}
],
"edges": [
{"from": "n1", "to": "n2"},
{"from": "n2", "to": "n3"}
]
});
let chain = chain_from_design_json(&design).unwrap();
assert_eq!(chain.nodes.len(), 3);
assert_eq!(chain.links.len(), 2);
}
#[test]
fn duplicate_node_id_should_error() {
let design = json!({
"name": "demo",
"nodes": [
{"id": "n1", "type": "start"},
{"id": "n1", "type": "end"}
],
"edges": []
});
let err = chain_from_design_json(&design).unwrap_err();
assert!(err.to_string().contains("duplicate node id"));
}
#[test]
fn missing_start_or_end_should_error() {
let design = json!({
"name": "demo",
"nodes": [
{"id": "n1", "type": "start"}
],
"edges": []
});
let err = chain_from_design_json(&design).unwrap_err();
assert!(err.to_string().contains("at least one end"));
}
#[test]
fn edge_ref_unknown_node_should_error() {
let design = json!({
"name": "demo",
"nodes": [
{"id": "n1", "type": "start"},
{"id": "n2", "type": "end"}
],
"edges": [
{"from": "n1", "to": "n3"}
]
});
let err = chain_from_design_json(&design).unwrap_err();
assert!(err.to_string().contains("edge to references unknown node"));
}
}

535
backend/src/flow/engine.rs Normal file
View File

@ -0,0 +1,535 @@
//! Flow execution engine (engine.rs): drives ChainDef flow graphs, supporting sync/async tasks, conditional routing, concurrent branches, and SSE push.
use std::cell::RefCell;
use std::collections::HashMap;
use std::time::Instant;
use crate::flow::executors::condition::eval_condition_json;
use super::{
context::{DriveOptions, ExecutionMode},
domain::{ChainDef, NodeKind},
task::TaskRegistry,
};
use futures::future::join_all;
use regex::Regex;
use rhai::{AST, Engine};
use tokio::sync::{Mutex, RwLock};
use tracing::info;
// Structs come right after the imports
pub struct FlowEngine {
pub tasks: TaskRegistry,
}
#[derive(Debug, Clone)]
pub struct DriveError {
pub node_id: String,
pub ctx: serde_json::Value,
pub logs: Vec<String>,
pub message: String,
}
// === Expression evaluation support: thread-local engine and AST cache, avoiding global Sync/Send constraints ===
// Module: flow execution engine (engine.rs)
// Purpose: drive a ChainDef flow graph, supporting:
// - synchronous and asynchronous (fire-and-forget) task execution
// - conditional routing (Rhai expressions and JSON conditions) with an unconditional fallback
// - concurrent branch fan-out, awaited via join_all
// - real-time SSE event push (per-line increments plus node-level slices)
// Design notes:
// - expressions run on a thread-local Rhai Engine with an AST cache, avoiding global Send/Sync constraints
// - the shared context is a serde_json::Value behind an RwLock; log aggregation uses Mutex<Vec<String>>
// - no conflict detection: concurrent modification is allowed; the last writeback/write along a code path wins
fn regex_match(s: &str, pat: &str) -> bool {
Regex::new(pat).map(|re| re.is_match(s)).unwrap_or(false)
}
// Common string helpers, callable directly in expressions (free-function style)
fn contains(s: &str, sub: &str) -> bool { s.contains(sub) }
fn starts_with(s: &str, prefix: &str) -> bool { s.starts_with(prefix) }
fn ends_with(s: &str, suffix: &str) -> bool { s.ends_with(suffix) }
// New: emptiness checks (accept any Dynamic type)
fn is_empty(v: rhai::Dynamic) -> bool {
if v.is_unit() { return true; }
if let Some(s) = v.clone().try_cast::<rhai::ImmutableString>() {
return s.is_empty();
}
if let Some(a) = v.clone().try_cast::<rhai::Array>() {
return a.is_empty();
}
if let Some(m) = v.clone().try_cast::<rhai::Map>() {
return m.is_empty();
}
false
}
fn not_empty(v: rhai::Dynamic) -> bool { !is_empty(v) }
thread_local! {
static RHIA_ENGINE: RefCell<Engine> = RefCell::new({
let mut eng = Engine::new();
// Cap the operation count so complex expressions cannot consume unbounded CPU
eng.set_max_operations(10_000_000);
// Strict variable mode, so typos fail loudly instead of silently evaluating to null
eng.set_strict_variables(true);
// Register the common helper functions
eng.register_fn("regex_match", regex_match);
eng.register_fn("contains", contains);
eng.register_fn("starts_with", starts_with);
eng.register_fn("ends_with", ends_with);
// New: register the emptiness helpers (usable in both function and method call style)
eng.register_fn("is_empty", is_empty);
eng.register_fn("not_empty", not_empty);
eng
});
// Simple AST cache: compiled results keyed by expression string (thread-local)
static AST_CACHE: RefCell<HashMap<String, AST>> = RefCell::new(HashMap::new());
}
// Evaluate a Rhai expression to bool, exposing a ctx variable (serde_json::Value)
fn eval_rhai_expr_bool(expr: &str, ctx: &serde_json::Value) -> bool {
// Build a scope and inject ctx
let mut scope = rhai::Scope::new();
let dyn_ctx = match rhai::serde::to_dynamic(ctx.clone()) { Ok(d) => d, Err(_) => rhai::Dynamic::UNIT };
scope.push_dynamic("ctx", dyn_ctx);
// Read the AST from the cache first; on a miss, compile, cache, then evaluate
let cached = AST_CACHE.with(|c| c.borrow().get(expr).cloned());
if let Some(ast) = cached {
return RHIA_ENGINE.with(|eng| eng.borrow().eval_ast_with_scope::<bool>(&mut scope, &ast).unwrap_or(false));
}
let compiled = RHIA_ENGINE.with(|eng| eng.borrow().compile_with_scope(&mut scope, expr));
match compiled {
Ok(ast) => {
// Crude capacity control: clear the cache past 1024 entries to avoid unbounded growth
AST_CACHE.with(|c| {
let mut cache = c.borrow_mut();
if cache.len() > 1024 { cache.clear(); }
cache.insert(expr.to_string(), ast.clone());
});
RHIA_ENGINE.with(|eng| eng.borrow().eval_ast_with_scope::<bool>(&mut scope, &ast).unwrap_or(false))
}
Err(_) => false,
}
}
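Both evaluators share the same thread-local compile-and-cache pattern; it can be sketched without the rhai dependency, with `compile` standing in for `Engine::compile_with_scope`:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // Compiled-artifact cache keyed by source expression
    // (thread-local, so no locking is needed).
    static AST_CACHE: RefCell<HashMap<String, String>> = RefCell::new(HashMap::new());
}

// Stand-in for Engine::compile_with_scope; the real engine returns an AST.
fn compile(expr: &str) -> String {
    format!("compiled({expr})")
}

fn eval_cached(expr: &str) -> String {
    // Fast path: clone the cached artifact out (mirrors `c.borrow().get(expr).cloned()`).
    if let Some(ast) = AST_CACHE.with(|c| c.borrow().get(expr).cloned()) {
        return ast;
    }
    let ast = compile(expr);
    AST_CACHE.with(|c| {
        let mut cache = c.borrow_mut();
        // Crude capacity control, as in the engine: clear past 1024 entries.
        if cache.len() > 1024 { cache.clear(); }
        cache.insert(expr.to_string(), ast.clone());
    });
    ast
}

fn main() {
    assert_eq!(eval_cached("ctx.a > 1"), "compiled(ctx.a > 1)");
    assert_eq!(eval_cached("ctx.a > 1"), "compiled(ctx.a > 1)"); // served from cache
}
```

Cloning the value out of the borrow before evaluating keeps the `RefCell` borrow short, which matters because evaluation can itself call back into cached helpers.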
// Generic: evaluate a Rhai expression and convert it to serde_json::Value (errors are returned)
#[derive(Debug, Clone)]
pub enum RhaiExecError {
Compile { message: String },
Runtime { message: String },
Serde { message: String },
}
impl std::fmt::Display for RhaiExecError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
RhaiExecError::Compile { message } => write!(f, "compile error: {}", message),
RhaiExecError::Runtime { message } => write!(f, "runtime error: {}", message),
RhaiExecError::Serde { message } => write!(f, "serde error: {}", message),
}
}
}
impl std::error::Error for RhaiExecError {}
pub(crate) fn eval_rhai_expr_json(expr: &str, ctx: &serde_json::Value) -> Result<serde_json::Value, RhaiExecError> {
// Build a scope and inject ctx
let mut scope = rhai::Scope::new();
let dyn_ctx = match rhai::serde::to_dynamic(ctx.clone()) { Ok(d) => d, Err(_) => rhai::Dynamic::UNIT };
scope.push_dynamic("ctx", dyn_ctx);
// Read the AST from the cache first; on a miss, compile, cache, then evaluate
let cached = AST_CACHE.with(|c| c.borrow().get(expr).cloned());
let eval = |ast: &AST, scope: &mut rhai::Scope| -> Result<serde_json::Value, RhaiExecError> {
RHIA_ENGINE.with(|eng| {
eng.borrow()
.eval_ast_with_scope::<rhai::Dynamic>(scope, ast)
.map_err(|e| RhaiExecError::Runtime { message: e.to_string() })
.and_then(|d| rhai::serde::from_dynamic(&d).map_err(|e| RhaiExecError::Serde { message: e.to_string() }))
})
};
if let Some(ast) = cached {
return eval(&ast, &mut scope);
}
let compiled = RHIA_ENGINE.with(|eng| eng.borrow().compile_with_scope(&mut scope, expr));
match compiled {
Ok(ast) => {
AST_CACHE.with(|c| {
let mut cache = c.borrow_mut();
if cache.len() > 1024 { cache.clear(); }
cache.insert(expr.to_string(), ast.clone());
});
eval(&ast, &mut scope)
}
Err(e) => Err(RhaiExecError::Compile { message: e.to_string() }),
}
}
impl FlowEngine {
pub fn new(tasks: TaskRegistry) -> Self { Self { tasks } }
pub fn builder() -> FlowEngineBuilder { FlowEngineBuilder::default() }
pub async fn drive(&self, chain: &ChainDef, ctx: serde_json::Value, opts: DriveOptions) -> anyhow::Result<(serde_json::Value, Vec<String>)> {
// 1) Pick the start node: prefer a Start node; otherwise the first node with in-degree 0; otherwise fall back to the first node
let start = if let Some(n) = chain
.nodes
.iter()
.find(|n| matches!(n.kind, NodeKind::Start))
{
n.id.0.clone()
} else {
// Compute in-degrees
let mut indeg: HashMap<&str, usize> = HashMap::new();
for n in &chain.nodes { indeg.entry(n.id.0.as_str()).or_insert(0); }
for l in &chain.links { *indeg.entry(l.to.0.as_str()).or_insert(0) += 1; }
if let Some(n) = chain.nodes.iter().find(|n| indeg.get(n.id.0.as_str()).copied().unwrap_or(0) == 0) {
n.id.0.clone()
} else {
chain
.nodes
.first()
.ok_or_else(|| anyhow::anyhow!("empty chain"))?
.id
.0
.clone()
}
};
// 2) Build data structures that can be shared across concurrent branches
// Copy nodes and edges (preserving order) into owned HashMaps so concurrent branches can use them safely
let node_map_owned: HashMap<String, super::domain::NodeDef> = chain.nodes.iter().map(|n| (n.id.0.clone(), n.clone())).collect();
let mut adj_owned: HashMap<String, Vec<super::domain::LinkDef>> = HashMap::new();
for l in &chain.links { adj_owned.entry(l.from.0.clone()).or_default().push(l.clone()); }
let node_map = std::sync::Arc::new(node_map_owned);
let adj = std::sync::Arc::new(adj_owned);
// Shared context (concurrent modification allowed; no conflict detection on our side)
let shared_ctx = std::sync::Arc::new(RwLock::new(ctx));
// Shared log aggregation
let logs_shared = std::sync::Arc::new(Mutex::new(Vec::<String>::new()));
// 3) Drive concurrently, starting from the start node
let tasks = self.tasks.clone();
drive_from(tasks, node_map.clone(), adj.clone(), start, shared_ctx.clone(), opts.clone(), logs_shared.clone()).await?;
// 4) Collect and return
let final_ctx = { shared_ctx.read().await.clone() };
let logs = { logs_shared.lock().await.clone() };
Ok((final_ctx, logs))
}
}
// Drive execution from the given node; when several edges match:
// - the first one continues within the current task
// - the remaining branches are spawned in parallel; returns once all branches finish
async fn drive_from(
tasks: TaskRegistry,
node_map: std::sync::Arc<HashMap<String, super::domain::NodeDef>>,
adj: std::sync::Arc<HashMap<String, Vec<super::domain::LinkDef>>>,
start: String,
ctx: std::sync::Arc<RwLock<serde_json::Value>>, // shared context (concurrent writes serialized by the write lock; no conflict detection)
opts: DriveOptions,
logs: std::sync::Arc<Mutex<Vec<String>>>,
) -> anyhow::Result<()> {
let mut current = start;
let mut steps = 0usize;
loop {
if steps >= opts.max_steps { break; }
steps += 1;
// Look up the node
let node = match node_map.get(&current) { Some(n) => n, None => break };
// Entering the node: start timing
let node_id_str = node.id.0.clone();
let node_start = Instant::now();
// Remember the log length on entry so the node-level slice can be taken on exit
let pre_len = { logs.lock().await.len() };
// Emit an incremental SSE event (a single log line) on every log append, for better real-time feedback
// push_and_emit:
// - push the single log line to the shared log
// - if an SSE channel exists, snapshot the context and send a one-line incremental event
async fn push_and_emit(
logs: &std::sync::Arc<tokio::sync::Mutex<Vec<String>>>,
opts: &super::context::DriveOptions,
node_id: &str,
ctx: &std::sync::Arc<tokio::sync::RwLock<serde_json::Value>>,
msg: String,
) {
{
let mut lg = logs.lock().await;
lg.push(msg.clone());
}
if let Some(tx) = opts.event_tx.as_ref() {
let ctx_snapshot = { ctx.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id.to_string(), vec![msg], ctx_snapshot).await;
}
}
// The node-enter line is also pushed in real time
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("enter node: {}", node.id.0)).await;
info!(target: "udmin.flow", "enter node: {}", node.id.0);
// Execute the task
if let Some(task_name) = &node.task {
if let Some(task) = tasks.get(task_name) {
match opts.execution_mode {
ExecutionMode::Sync => {
// Execute against a snapshot and write the whole thing back afterwards (the last write may overwrite concurrent changes; no conflict detection on our side)
let mut local_ctx = { ctx.read().await.clone() };
match task.execute(&node.id, node, &mut local_ctx).await {
Ok(_) => {
{ let mut w = ctx.write().await; *w = local_ctx; }
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("exec task: {} (sync)", task_name)).await;
info!(target: "udmin.flow", "exec task: {} (sync)", task_name);
}
Err(e) => {
let err_msg = format!("task error: {}: {}", task_name, e);
push_and_emit(&logs, &opts, &node_id_str, &ctx, err_msg.clone()).await;
// Snapshot the state and return a DriveError
let ctx_snapshot = { ctx.read().await.clone() };
let logs_snapshot = { logs.lock().await.clone() };
return Err(anyhow::Error::new(DriveError { node_id: node_id_str.clone(), ctx: ctx_snapshot, logs: logs_snapshot, message: err_msg }));
}
}
}
ExecutionMode::AsyncFireAndForget => {
// fire-and-forget: run against a snapshot without writing back to the shared ctx (except variable tasks, which get a bounded diff writeback)
let task_ctx = { ctx.read().await.clone() };
let task_arc = task.clone();
let name_for_log = task_name.clone();
let node_id = node.id.clone();
let node_def = node.clone();
let logs_clone = logs.clone();
let ctx_clone = ctx.clone();
let event_tx_opt = opts.event_tx.clone();
tokio::spawn(async move {
let mut c = task_ctx.clone();
let _ = task_arc.execute(&node_id, &node_def, &mut c).await;
// Writeback for variable tasks: copy top-level added/changed keys back into the shared ctx and remove the corresponding variable node entry
if node_def.task.as_deref() == Some("variable") {
// Compute the top-level diff (excluding nodes); write back only new or changed keys
let mut changed: Vec<(String, serde_json::Value)> = Vec::new();
if let (serde_json::Value::Object(before_map), serde_json::Value::Object(after_map)) = (&task_ctx, &c) {
for (k, v_after) in after_map.iter() {
if k == "nodes" { continue; }
match before_map.get(k) {
Some(v_before) if v_before == v_after => {}
_ => changed.push((k.clone(), v_after.clone())),
}
}
}
{
let mut w = ctx_clone.write().await;
if let serde_json::Value::Object(map) = &mut *w {
for (k, v) in changed.into_iter() { map.insert(k, v); }
if let Some(serde_json::Value::Object(nodes)) = map.get_mut("nodes") {
nodes.remove(node_id.0.as_str());
}
}
}
{
let mut lg = logs_clone.lock().await;
lg.push(format!("exec task done (async): {} (writeback variable)", name_for_log));
}
// Push the async-completion log in real time
if let Some(tx) = event_tx_opt.as_ref() {
let ctx_snapshot = { ctx_clone.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id.0.clone(), vec![format!("exec task done (async): {} (writeback variable)", name_for_log)], ctx_snapshot).await;
}
info!(target: "udmin.flow", "exec task done (async): {} (writeback variable)", name_for_log);
} else {
{
let mut lg = logs_clone.lock().await;
lg.push(format!("exec task done (async): {}", name_for_log));
}
// Push the async-completion log in real time
if let Some(tx) = event_tx_opt.as_ref() {
let ctx_snapshot = { ctx_clone.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id.0.clone(), vec![format!("exec task done (async): {}", name_for_log)], ctx_snapshot).await;
}
info!(target: "udmin.flow", "exec task done (async): {}", name_for_log);
}
});
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("spawn task: {} (async)", task_name)).await;
info!(target: "udmin.flow", "spawn task: {} (async)", task_name);
}
}
} else {
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("task not found: {} (skip)", task_name)).await;
info!(target: "udmin.flow", "task not found: {} (skip)", task_name);
}
}
// End node: record the elapsed time and stop
if matches!(node.kind, NodeKind::End) {
let duration = node_start.elapsed().as_millis();
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("leave node: {} {} ms", node_id_str, duration)).await;
info!(target: "udmin.flow", "leave node: {} {} ms", node_id_str, duration);
if let Some(tx) = opts.event_tx.as_ref() {
let node_logs = { let lg = logs.lock().await; lg[pre_len..].to_vec() };
let ctx_snapshot = { ctx.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id_str.clone(), node_logs, ctx_snapshot).await;
}
break;
}
// Pick the next batch of links; conditions are evaluated only on Condition nodes, other nodes ignore conditions and fan out along all outgoing edges
let mut nexts: Vec<String> = Vec::new();
if let Some(links) = adj.get(node.id.0.as_str()) {
if matches!(node.kind, NodeKind::Condition) {
// Conditional edges: every edge that evaluates true is added to nexts; an empty-string condition counts as unconditional and is not evaluated here
for link in links.iter() {
if let Some(cond_str) = &link.condition {
if cond_str.trim().is_empty() {
// Empty condition: treated as an unconditional edge, handled by the fallback logic below
info!(target: "udmin.flow", from=%node.id.0, to=%link.to.0, "condition link: empty (unconditional candidate)");
continue;
}
let trimmed = cond_str.trim_start();
let (kind, ok) = if trimmed.starts_with('{') || trimmed.starts_with('[') {
match serde_json::from_str::<serde_json::Value>(cond_str) {
Ok(v) => {
let snapshot = { ctx.read().await.clone() };
("json", eval_condition_json(&snapshot, &v).unwrap_or(false))
}
Err(_) => ("json_parse_error", false),
}
} else {
let snapshot = { ctx.read().await.clone() };
("rhai", eval_rhai_expr_bool(cond_str, &snapshot))
};
info!(target: "udmin.flow", from=%node.id.0, to=%link.to.0, cond_kind=%kind, cond_len=%cond_str.len(), result=%ok, "condition link evaluated");
if ok { nexts.push(link.to.0.clone()); }
} else {
// No condition field: treated as an unconditional edge
info!(target: "udmin.flow", from=%node.id.0, to=%link.to.0, "condition link: none (unconditional candidate)");
}
}
// If no conditional edge matched, take the first unconditional edge (unconditional = no condition or an empty string)
if nexts.is_empty() {
let mut picked = None;
for link in links.iter() {
match &link.condition {
None => { picked = Some(link.to.0.clone()); break; }
Some(s) if s.trim().is_empty() => { picked = Some(link.to.0.clone()); break; }
_ => {}
}
}
if let Some(to) = picked {
info!(target: "udmin.flow", from=%node.id.0, to=%to, "condition fallback: pick unconditional");
nexts.push(to);
} else {
info!(target: "udmin.flow", node=%node.id.0, "condition: no matched and no unconditional, stop");
}
}
} else {
// Non-condition nodes ignore conditions and fan out along all out-edges (all executed in parallel)
for link in links.iter() {
nexts.push(link.to.0.clone());
info!(target: "udmin.flow", from=%node.id.0, to=%link.to.0, "fan-out from non-condition node");
}
}
}
// No successors: record elapsed time, then finish
if nexts.is_empty() {
let duration = node_start.elapsed().as_millis();
{
let mut lg = logs.lock().await;
lg.push(format!("leave node: {} {} ms", node_id_str, duration));
}
push_and_emit(&logs, &opts, &node_id_str, &ctx, format!("leave node: {} {} ms", node_id_str, duration)).await;
info!(target: "udmin.flow", "leave node: {} {} ms", node_id_str, duration);
if let Some(tx) = opts.event_tx.as_ref() {
let node_logs = { let lg = logs.lock().await; lg[pre_len..].to_vec() };
let ctx_snapshot = { ctx.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id_str.clone(), node_logs, ctx_snapshot).await;
}
break;
}
// Single branch: record elapsed time, then advance
if nexts.len() == 1 {
let duration = node_start.elapsed().as_millis();
{
let mut lg = logs.lock().await;
lg.push(format!("leave node: {} {} ms", node_id_str, duration));
}
info!(target: "udmin.flow", "leave node: {} {} ms", node_id_str, duration);
if let Some(tx) = opts.event_tx.as_ref() {
let node_logs = { let lg = logs.lock().await; lg[pre_len..].to_vec() };
let ctx_snapshot = { ctx.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id_str.clone(), node_logs, ctx_snapshot).await;
}
current = nexts.remove(0);
continue;
}
// Multiple branches: the main branch continues along the first edge; the rest run in parallel and are awaited
let mut futs = Vec::new();
for to_id in nexts.iter().skip(1).cloned() {
let tasks_c = tasks.clone();
let node_map_c = node_map.clone();
let adj_c = adj.clone();
let ctx_c = ctx.clone();
let opts_c = opts.clone();
let logs_c = logs.clone();
futs.push(drive_from(tasks_c, node_map_c, adj_c, to_id, ctx_c, opts_c, logs_c));
}
// The current branch continues along the first edge
current = nexts.into_iter().next().unwrap();
// Await the spawned branches at a safe point (here: before entering the next iteration)
let _ = join_all(futs).await;
// Multiple branches: record this node's elapsed time (includes waiting for the other branches to finish)
let duration = node_start.elapsed().as_millis();
{
let mut lg = logs.lock().await;
lg.push(format!("leave node: {} {} ms", node_id_str, duration));
}
info!(target: "udmin.flow", "leave node: {} {} ms", node_id_str, duration);
if let Some(tx) = opts.event_tx.as_ref() {
let node_logs = { let lg = logs.lock().await; lg[pre_len..].to_vec() };
let ctx_snapshot = { ctx.read().await.clone() };
crate::middlewares::sse::emit_node(&tx, node_id_str.clone(), node_logs, ctx_snapshot).await;
}
}
Ok(())
}
#[derive(Default)]
pub struct FlowEngineBuilder {
tasks: Option<TaskRegistry>,
}
impl FlowEngineBuilder {
pub fn tasks(mut self, reg: TaskRegistry) -> Self { self.tasks = Some(reg); self }
pub fn build(self) -> FlowEngine {
let tasks = self.tasks.unwrap_or_else(|| crate::flow::task::default_registry());
FlowEngine { tasks }
}
}
impl Default for FlowEngine {
fn default() -> Self { Self { tasks: crate::flow::task::default_registry() } }
}
impl std::fmt::Display for DriveError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.message)
}
}
impl std::error::Error for DriveError {}
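The Condition-node edge selection and its unconditional fallback can be sketched in isolation. A minimal model: links are `(condition, target)` pairs, `None` or an empty string marks an unconditional edge, and the closure stands in for the JSON/Rhai evaluators (`select_nexts` and the tuple shape are illustrative, not the engine's actual types):

```rust
// Simplified model of a Condition node's out-edge selection:
// evaluate each conditional edge; if none match, fall back to the
// first unconditional edge (missing or empty-string condition).
fn select_nexts(
    links: &[(Option<String>, String)],
    eval: impl Fn(&str) -> bool,
) -> Vec<String> {
    let mut nexts: Vec<String> = Vec::new();
    for (cond, to) in links {
        match cond.as_deref() {
            // Unconditional edges are skipped here; they only matter as fallback
            None => {}
            Some(s) if s.trim().is_empty() => {}
            Some(s) => {
                if eval(s) {
                    nexts.push(to.clone());
                }
            }
        }
    }
    // Fallback: no condition matched -> take the first unconditional edge
    if nexts.is_empty() {
        if let Some((_, to)) = links
            .iter()
            .find(|(c, _)| c.as_deref().map_or(true, |s| s.trim().is_empty()))
        {
            nexts.push(to.clone());
        }
    }
    nexts
}
```

Note that several conditional edges can match at once, which is what drives the multi-branch fan-out above.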


@@ -0,0 +1,217 @@
// third-party
use anyhow::Result;
use serde_json::Value as V;
use tracing::info;
// Business functions
pub(crate) fn eval_condition_json(ctx: &serde_json::Value, cond: &serde_json::Value) -> Result<bool> {
// If cond is an array, evaluate with AND semantics (true only if every item is true)
if let Some(arr) = cond.as_array() {
let mut all_true = true;
for (idx, item) in arr.iter().enumerate() {
let ok = eval_condition_json(ctx, item)?;
info!(target = "udmin.flow", index = idx, result = %ok, "condition group item (AND)");
if !ok { all_true = false; }
}
info!(target = "udmin.flow", count = arr.len(), result = %all_true, "condition group evaluated (AND)");
return Ok(all_true);
}
// Supports the shape exported by the frontend Condition component: { left:{type, content}, operator, right? }
let left = cond.get("left").ok_or_else(|| anyhow::anyhow!("missing left"))?;
let op_raw = cond.get("operator").and_then(|v| v.as_str()).unwrap_or("");
let right_raw = cond.get("right");
// Weak-equality flag: when the right value's schema.extra.weak is true, string comparison ignores case and leading/trailing whitespace
let weak_eq = right_raw
.and_then(|r| r.get("schema"))
.and_then(|s| s.get("extra"))
.and_then(|e| e.get("weak"))
.and_then(|b| b.as_bool())
.unwrap_or(false);
let lval = resolve_value(ctx, left)?;
let rval = match right_raw { Some(v) => Some(resolve_value(ctx, v)?), None => None };
// Normalize the operator: lowercase it and replace underscores with spaces
let op = op_raw.trim().to_lowercase().replace('_', " ");
// Helper functions
fn to_f64(v: &V) -> Option<f64> {
match v {
V::Number(n) => n.as_f64(),
V::String(s) => s.parse::<f64>().ok(),
_ => None,
}
}
fn is_empty_val(v: &V) -> bool {
match v {
V::Null => true,
V::String(s) => s.trim().is_empty(),
V::Array(a) => a.is_empty(),
V::Object(m) => m.is_empty(),
_ => false,
}
}
fn norm_str(s: &str) -> String { s.trim().to_lowercase() }
fn json_equal(a: &V, b: &V, weak: bool) -> bool {
match (a, b) {
// Numbers: lenient comparison (strings are parsed as numbers)
(V::Number(_), V::Number(_)) | (V::Number(_), V::String(_)) | (V::String(_), V::Number(_)) => {
match (to_f64(a), to_f64(b)) { (Some(x), Some(y)) => x == y, _ => a == b }
}
// Strings: if weak, ignore case and leading/trailing whitespace
(V::String(sa), V::String(sb)) if weak => norm_str(sa) == norm_str(sb),
_ => a == b,
}
}
fn contains(left: &V, right: &V, weak: bool) -> bool {
match (left, right) {
(V::String(s), V::String(t)) => {
if weak { norm_str(s).contains(&norm_str(t)) } else { s.contains(t) }
}
(V::Array(arr), r) => arr.iter().any(|x| json_equal(x, r, weak)),
(V::Object(map), V::String(key)) => {
if weak { map.keys().any(|k| norm_str(k) == norm_str(key)) } else { map.contains_key(key) }
}
_ => false,
}
}
fn in_op(left: &V, right: &V, weak: bool) -> bool {
match right {
V::Array(arr) => arr.iter().any(|x| json_equal(left, x, weak)),
V::Object(map) => match left { V::String(k) => {
if weak { map.keys().any(|kk| norm_str(kk) == norm_str(k)) } else { map.contains_key(k) }
}, _ => false },
V::String(hay) => match left { V::String(needle) => {
if weak { norm_str(hay).contains(&norm_str(needle)) } else { hay.contains(needle) }
}, _ => false },
_ => false,
}
}
fn bool_like(v: &V) -> bool {
match v {
V::Bool(b) => *b,
V::Null => false,
V::Number(n) => n.as_f64().map(|x| x != 0.0).unwrap_or(false),
V::String(s) => {
let s_l = s.trim().to_lowercase();
if s_l == "true" { true } else if s_l == "false" { false } else { !s_l.is_empty() }
}
V::Array(a) => !a.is_empty(),
V::Object(m) => !m.is_empty(),
}
}
let res = match (op.as_str(), &lval, &rval) {
// equal / not equal (covers all JSON types; numbers compared as f64, everything else by deep equality)
("equal" | "equals" | "==" | "eq", l, Some(r)) => json_equal(l, r, weak_eq),
("not equal" | "!=" | "not equals" | "neq", l, Some(r)) => !json_equal(l, r, weak_eq),
// Numeric comparisons
("greater than" | ">" | "gt", l, Some(r)) => match (to_f64(l), to_f64(r)) { (Some(a), Some(b)) => a > b, _ => false },
("greater than or equal" | ">=" | "gte" | "ge", l, Some(r)) => match (to_f64(l), to_f64(r)) { (Some(a), Some(b)) => a >= b, _ => false },
("less than" | "<" | "lt", l, Some(r)) => match (to_f64(l), to_f64(r)) { (Some(a), Some(b)) => a < b, _ => false },
("less than or equal" | "<=" | "lte" | "le", l, Some(r)) => match (to_f64(l), to_f64(r)) { (Some(a), Some(b)) => a <= b, _ => false },
// contains / not contains (strings, arrays, object keys)
("contains", l, Some(r)) => contains(l, r, weak_eq),
("not contains", l, Some(r)) => !contains(l, r, weak_eq),
// Membership: left in right / not in
("in", l, Some(r)) => in_op(l, r, weak_eq),
("not in" | "nin", l, Some(r)) => !in_op(l, r, weak_eq),
// Empty / not empty (strings, arrays, objects, null)
("is empty" | "empty" | "isempty", l, _) => is_empty_val(l),
("is not empty" | "not empty" | "notempty", l, _) => !is_empty_val(l),
// Boolean checks (truthiness coercion for each type)
("is true" | "is true?" | "istrue", l, _) => bool_like(l),
("is false" | "isfalse", l, _) => !bool_like(l),
_ => false,
};
// Debug log to help diagnose why a condition did not match
let l_dbg = match &lval { V::String(s) => format!("\"{}\"", s), _ => format!("{}", lval) };
let r_dbg = match &rval { Some(V::String(s)) => format!("\"{}\"", s), Some(v) => format!("{}", v), None => "<none>".to_string() };
info!(target = "udmin.flow", op=%op, weak=%weak_eq, left=%l_dbg, right=%r_dbg, result=%res, "condition eval");
Ok(res)
}
pub(crate) fn resolve_value(ctx: &serde_json::Value, v: &serde_json::Value) -> Result<serde_json::Value> {
let t = v.get("type").and_then(|v| v.as_str()).unwrap_or("");
match t {
"constant" => Ok(v.get("content").cloned().unwrap_or(V::Null)),
"ref" => {
// content: [nodeId, field]
if let Some(arr) = v.get("content").and_then(|v| v.as_array()) {
if arr.len() >= 2 {
if let (Some(node), Some(field)) = (arr[0].as_str(), arr[1].as_str()) {
let val = ctx
.get("nodes")
.and_then(|n| n.get(node))
.and_then(|m| m.get(field))
.cloned()
.or_else(|| ctx.get(field).cloned())
.unwrap_or(V::Null);
return Ok(val);
}
}
}
Ok(V::Null)
}
"expression" => {
let expr = v.get("content").and_then(|x| x.as_str()).unwrap_or("");
if expr.trim().is_empty() { return Ok(V::Null); }
Ok(crate::flow::engine::eval_rhai_expr_json(expr, ctx).unwrap_or_else(|_| V::Null))
}
_ => Ok(V::Null),
}
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
fn cond_eq_const(left: serde_json::Value, right: serde_json::Value) -> serde_json::Value {
json!({
"left": {"type": "constant", "content": left},
"operator": "eq",
"right": {"type": "constant", "content": right}
})
}
#[test]
fn and_group_all_true() {
let ctx = json!({});
let group = json!([
cond_eq_const(json!(100), json!(100)),
json!({
"left": {"type": "constant", "content": 100},
"operator": ">",
"right": {"type": "constant", "content": 10}
})
]);
let ok = eval_condition_json(&ctx, &group).unwrap();
assert!(ok);
}
#[test]
fn and_group_has_false() {
let ctx = json!({});
let group = json!([
cond_eq_const(json!(100), json!(10)), // false
json!({
"left": {"type": "constant", "content": 100},
"operator": ">",
"right": {"type": "constant", "content": 10}
})
]);
let ok = eval_condition_json(&ctx, &group).unwrap();
assert!(!ok);
}
}
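The operator normalization and weak string equality used by `eval_condition_json` can be shown as standalone helpers (a sketch; these free functions are illustrative, not part of the crate's API):

```rust
// Normalize an operator name so "Not_Equal", "not_equal" and "not equal"
// all resolve to the same match arm in the dispatch table.
fn normalize_op(op_raw: &str) -> String {
    op_raw.trim().to_lowercase().replace('_', " ")
}

// "Weak" string equality: ignore case and leading/trailing whitespace,
// mirroring the norm_str-based comparison used when schema.extra.weak is set.
fn weak_str_eq(a: &str, b: &str) -> bool {
    a.trim().to_lowercase() == b.trim().to_lowercase()
}
```

Strict comparison remains the default; the weak path is opted into per condition by the frontend payload.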


@@ -0,0 +1,261 @@
// third-party
use async_trait::async_trait;
use serde_json::{json, Value};
use tracing::info;
// crate
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::task::Executor;
#[derive(Default)]
pub struct DbTask;
#[async_trait]
impl Executor for DbTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
// 1) Read the db config: node-level db only; no longer falls back to the global ctx.db, to avoid accidentally using the project database
let node_id_opt = Some(node_id.0.clone());
let cfg = match (&node_id_opt, ctx.get("nodes")) {
(Some(node_id), Some(nodes)) => nodes.get(&node_id).and_then(|n| n.get("db")).cloned(),
_ => None,
};
let Some(cfg) = cfg else {
info!(target = "udmin.flow", "db task: no config found, skip");
return Ok(());
};
// 3) Parse the config (including optional connection info)
let (sql, params, output_key, conn, mode_from_db) = parse_db_config(cfg)?;
// Read the result mode up front: only db.output.mode / db.outputMode / db.mode are honored; connection.mode is ignored
let result_mode = mode_from_db;
info!(target = "udmin.flow", "db task: exec sql: {}", sql);
// 4) Acquire a connection: db.connection must be declared explicitly; falling back to the project's global database is forbidden (security risk)
let db: std::borrow::Cow<'_, crate::db::Db>;
let tmp_conn; // holds the temporary connection within this scope
use sea_orm::{Statement, ConnectionTrait};
let conn_cfg = conn.ok_or_else(|| anyhow::anyhow!("db task: connection config is required (db.connection)"))?;
// Build the URL and open a temporary connection
let url = extract_connection_url(conn_cfg)?;
use sea_orm::{ConnectOptions, Database};
use std::time::Duration;
let mut opt = ConnectOptions::new(url);
opt.max_connections(20)
.min_connections(1)
.connect_timeout(Duration::from_secs(8))
.idle_timeout(Duration::from_secs(120))
.sqlx_logging(true);
tmp_conn = Database::connect(opt).await?;
db = std::borrow::Cow::Owned(tmp_conn);
// Determine whether this is a SELECT (simple prefix check; tolerates leading whitespace and parentheses)
let is_select = {
let s = sql.trim_start();
let s = s.trim_start_matches('(');
s.to_uppercase().starts_with("SELECT")
};
// Build the parameter list (supports both positional and named forms)
let params_vec: Vec<sea_orm::Value> = match params {
None => vec![],
Some(Value::Array(arr)) => arr.into_iter().map(json_to_db_value).collect::<anyhow::Result<_>>()?,
Some(Value::Object(obj)) => {
// For a named-parameter object, insertion order is not guaranteed; values are collected as positional bindings, so the SQL should use `?` placeholders
obj.into_iter().map(|(_, v)| json_to_db_value(v)).collect::<anyhow::Result<_>>()?
}
Some(v) => {
// Any other type: treated as a single positional parameter
vec![json_to_db_value(v)?]
}
};
let stmt = Statement::from_sql_and_values(db.get_database_backend(), &sql, params_vec);
let result = if is_select {
let rows = db.query_all(stmt).await?;
// Convert the QueryResult rows into a JSON array
let mut out = Vec::with_capacity(rows.len());
for row in rows {
let mut obj = serde_json::Map::new();
// Read the column names
let cols = row.column_names();
for col_name in cols.iter() {
let key = col_name.to_string();
// Try to extract a generic JSON value: string first, then number, bool, binary, null
let val = try_get_as_json(&row, &key);
obj.insert(key, val);
}
out.push(Value::Object(obj));
}
// Default rows mode: return the array directly
match result_mode.as_deref() {
// Return the first row as a field object (Null if none)
Some("fields") | Some("first") => {
if let Some(Value::Object(m)) = out.get(0) { Value::Object(m.clone()) } else { Value::Null }
}
// Both the default and explicit rows mode return the array
_ => Value::Array(out),
}
} else {
let exec = db.execute(stmt).await?;
// Non-SELECT statements return the affected row count by default
match result_mode.as_deref() {
// If rows is explicitly requested, return an empty array
Some("rows") => json!([]),
_ => json!(exec.rows_affected()),
}
};
// 5) Write the result back to ctx and mask sensitive info
let write_key = output_key.unwrap_or_else(|| "db_response".to_string());
if let (Some(node_id), Some(obj)) = (node_id_opt, ctx.as_object_mut()) {
if let Some(nodes) = obj.get_mut("nodes").and_then(|v| v.as_object_mut()) {
if let Some(target) = nodes.get_mut(&node_id).and_then(|v| v.as_object_mut()) {
// Write the result
target.insert(write_key, result);
// Mask the password field (other config left unchanged)
if let Some(dbv) = target.get_mut("db") {
if let Some(dbo) = dbv.as_object_mut() {
if let Some(connv) = dbo.get_mut("connection") {
match connv {
Value::Object(m) => {
if let Some(pw) = m.get_mut("password") {
*pw = Value::String("***".to_string());
}
if let Some(Value::String(url)) = m.get_mut("url") {
*url = "***".to_string();
}
}
Value::String(s) => { *s = "***".to_string(); }
_ => {}
}
}
}
}
return Ok(());
}
}
}
if let Value::Object(map) = ctx { map.insert(write_key, result); }
Ok(())
}
}
fn parse_db_config(cfg: Value) -> anyhow::Result<(String, Option<Value>, Option<String>, Option<Value>, Option<String>)> {
match cfg {
Value::String(sql) => Ok((sql, None, None, None, None)),
Value::Object(mut m) => {
let sql = m
.remove("sql")
.and_then(|v| v.as_str().map(|s| s.to_string()))
.ok_or_else(|| anyhow::anyhow!("db config missing sql"))?;
let params = m.remove("params");
let output_key = m.remove("outputKey").and_then(|v| v.as_str().map(|s| s.to_string()));
// Before removing connection, read a possible output mode from the db level
let mode_from_db = {
// db.output.mode
let from_output = m.get("output").and_then(|v| v.as_object()).and_then(|o| o.get("mode")).and_then(|v| v.as_str()).map(|s| s.to_string());
// db.outputMode or db.mode
let from_flat = m.get("outputMode").and_then(|v| v.as_str()).map(|s| s.to_string())
.or_else(|| m.get("mode").and_then(|v| v.as_str()).map(|s| s.to_string()));
from_output.or(from_flat)
};
let conn = m.remove("connection");
// Security policy: the connection must be declared explicitly; defaulting to the global database is forbidden
if conn.is_none() {
return Err(anyhow::anyhow!("db config missing connection (db.connection is required)"));
}
Ok((sql, params, output_key, conn, mode_from_db))
}
_ => Err(anyhow::anyhow!("invalid db config")),
}
}
fn extract_connection_url(cfg: Value) -> anyhow::Result<String> {
match cfg {
Value::String(url) => Ok(url),
Value::Object(mut m) => {
if let Some(url) = m.remove("url").and_then(|v| v.as_str().map(|s| s.to_string())) {
return Ok(url);
}
let driver = m
.remove("driver")
.and_then(|v| v.as_str().map(|s| s.to_string()))
.unwrap_or_else(|| "mysql".to_string());
// Special-case sqlite: only database is needed (a file path or :memory:)
if driver == "sqlite" {
let database = m.remove("database").and_then(|v| v.as_str().map(|s| s.to_string())).ok_or_else(|| anyhow::anyhow!("connection.database is required for sqlite unless url provided"))?;
return Ok(format!("sqlite://{}", database));
}
let host = m.remove("host").and_then(|v| v.as_str().map(|s| s.to_string())).unwrap_or_else(|| "localhost".to_string());
let port = m.remove("port").map(|v| match v { Value::Number(n) => n.to_string(), Value::String(s) => s, _ => String::new() });
let database = m.remove("database").and_then(|v| v.as_str().map(|s| s.to_string())).ok_or_else(|| anyhow::anyhow!("connection.database is required unless url provided"))?;
let username = m.remove("username").and_then(|v| v.as_str().map(|s| s.to_string())).ok_or_else(|| anyhow::anyhow!("connection.username is required unless url provided"))?;
let password = m.remove("password").and_then(|v| v.as_str().map(|s| s.to_string())).unwrap_or_default();
let port_part = port.filter(|s| !s.is_empty()).map(|s| format!(":{}", s)).unwrap_or_default();
let url = format!(
"{}://{}:{}@{}{}{}",
driver,
percent_encoding::utf8_percent_encode(&username, percent_encoding::NON_ALPHANUMERIC),
percent_encoding::utf8_percent_encode(&password, percent_encoding::NON_ALPHANUMERIC),
host,
port_part,
format!("/{}", database)
);
Ok(url)
}
_ => Err(anyhow::anyhow!("invalid connection config")),
}
}
#[allow(dead_code)]
fn get_result_mode_from_conn(conn: &Option<Value>) -> Option<String> {
match conn {
Some(Value::Object(m)) => m.get("mode").and_then(|v| v.as_str()).map(|s| s.to_string()),
_ => None,
}
}
fn json_to_db_value(v: Value) -> anyhow::Result<sea_orm::Value> {
use sea_orm::Value as DbValue;
let dv = match v {
Value::Null => DbValue::String(None),
Value::Bool(b) => DbValue::Bool(Some(b)),
Value::Number(n) => {
if let Some(i) = n.as_i64() { DbValue::BigInt(Some(i)) }
else if let Some(u) = n.as_u64() { DbValue::BigUnsigned(Some(u)) }
else if let Some(f) = n.as_f64() { DbValue::Double(Some(f)) }
else { DbValue::String(None) }
}
Value::String(s) => DbValue::String(Some(Box::new(s))),
Value::Array(arr) => {
// No portable cross-database array type: store as a JSON string
let s = serde_json::to_string(&Value::Array(arr))?;
DbValue::String(Some(Box::new(s)))
}
Value::Object(obj) => {
let s = serde_json::to_string(&Value::Object(obj))?;
DbValue::String(Some(Box::new(s)))
}
};
Ok(dv)
}
fn try_get_as_json(row: &sea_orm::QueryResult, col_name: &str) -> Value {
// This function is defined in the remainder of the original file; kept unchanged
#[allow(unused)]
fn guess_text(bytes: &[u8]) -> Option<String> {
String::from_utf8(bytes.to_vec()).ok()
}
row.try_get::<String>("", col_name)
.map(Value::String)
.or_else(|_| row.try_get::<i64>("", col_name).map(|v| Value::Number(v.into())))
.or_else(|_| row.try_get::<u64>("", col_name).map(|v| Value::Number(v.into())))
.or_else(|_| row.try_get::<f64>("", col_name).map(|v| serde_json::Number::from_f64(v).map(Value::Number).unwrap_or(Value::Null)))
.or_else(|_| row.try_get::<bool>("", col_name).map(Value::Bool))
.or_else(|_| row.try_get::<Vec<u8>>("", col_name).map(|v| guess_text(&v).map(Value::String).unwrap_or(Value::Null)))
.unwrap_or_else(|_| Value::Null)
}
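The DSN assembly in `extract_connection_url` can be sketched without the `percent_encoding` crate by hand-rolling the `NON_ALPHANUMERIC` set (every byte that is not an ASCII letter or digit becomes `%XX`). `build_url` and `percent_encode_non_alnum` below are illustrative names, not the crate's API:

```rust
// Percent-encode every non-alphanumeric byte, mimicking
// percent_encoding::NON_ALPHANUMERIC for credentials in a DSN.
fn percent_encode_non_alnum(s: &str) -> String {
    s.bytes()
        .map(|b| {
            if b.is_ascii_alphanumeric() {
                (b as char).to_string()
            } else {
                format!("%{:02X}", b)
            }
        })
        .collect()
}

// Assemble driver://user:pass@host[:port]/database, with credentials
// encoded so characters like '@' or ':' cannot corrupt the URL.
fn build_url(
    driver: &str,
    user: &str,
    pass: &str,
    host: &str,
    port: Option<u16>,
    db: &str,
) -> String {
    let port_part = port.map(|p| format!(":{}", p)).unwrap_or_default();
    format!(
        "{}://{}:{}@{}{}/{}",
        driver,
        percent_encode_non_alnum(user),
        percent_encode_non_alnum(pass),
        host,
        port_part,
        db
    )
}
```

Encoding the credentials is what keeps a password such as `p@ss` from being parsed as a host separator.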


@@ -0,0 +1,128 @@
// std
use std::collections::HashMap;
// third-party
use async_trait::async_trait;
use serde_json::{json, Map, Value};
use tracing::info;
// crate
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::task::Executor;
use crate::middlewares::http_client::{execute_http, HttpClientOptions, HttpRequest};
// Structs: immediately after the use statements
#[derive(Default)]
pub struct HttpTask;
#[derive(Default, Clone)]
struct HttpOpts {
timeout_ms: Option<u64>,
insecure: bool,
ca_pem: Option<String>,
http1_only: bool,
}
// Business impls and functions: placed last
#[async_trait]
impl Executor for HttpTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
// 1) Extract the http config from ctx: prefer nodes.<node_id>.http, then fall back to the global http
let node_id_opt = Some(node_id.0.clone());
let cfg = match (&node_id_opt, ctx.get("nodes")) {
(Some(node_id), Some(nodes)) => nodes.get(&node_id).and_then(|n| n.get("http")).cloned(),
_ => None,
}.or_else(|| ctx.get("http").cloned());
let Some(cfg) = cfg else {
info!(target = "udmin.flow", "http task: no config found, skip");
return Ok(());
};
// 3) Parse the config -> convert into middleware request parameters
let (method, url, headers, query, body, opts) = parse_http_config(cfg)?;
info!(target = "udmin.flow", "http task: {} {}", method, url);
let req = HttpRequest {
method,
url,
headers,
query,
body,
};
let client_opts = HttpClientOptions {
timeout_ms: opts.timeout_ms,
insecure: opts.insecure,
ca_pem: opts.ca_pem,
http1_only: opts.http1_only,
};
// 4) Send the request via the middleware
let out = execute_http(req, client_opts).await?;
let status = out.status;
let headers_out = out.headers;
let parsed_body = out.body;
// 5) Write the result back to ctx
let result = json!({
"status": status,
"headers": headers_out,
"body": parsed_body,
});
// Prefer writing nodes.<node_id>.http_response; otherwise write to the global http_response
if let (Some(node_id), Some(obj)) = (node_id_opt, ctx.as_object_mut()) {
if let Some(nodes) = obj.get_mut("nodes").and_then(|v| v.as_object_mut()) {
if let Some(target) = nodes.get_mut(&node_id).and_then(|v| v.as_object_mut()) {
target.insert("http_response".to_string(), result);
return Ok(());
}
}
}
// Fallback: write to the global ctx
if let Value::Object(map) = ctx { map.insert("http_response".to_string(), result); }
Ok(())
}
}
fn parse_http_config(cfg: Value) -> anyhow::Result<(
String,
String,
Option<HashMap<String, String>>,
Option<Map<String, Value>>,
Option<Value>,
HttpOpts,
)> {
// Two config shapes are supported:
// 1) String: treated as a URL, with method GET
// 2) Object: { method, url, headers, query, body }
match cfg {
Value::String(url) => Ok(("GET".into(), url, None, None, None, HttpOpts::default())),
Value::Object(mut m) => {
let method = m.remove("method").and_then(|v| v.as_str().map(|s| s.to_uppercase())).unwrap_or_else(|| "GET".into());
let url = m.remove("url").and_then(|v| v.as_str().map(|s| s.to_string()))
.ok_or_else(|| anyhow::anyhow!("http config missing url"))?;
let headers = m.remove("headers").and_then(|v| v.as_object().cloned()).map(|obj| {
obj.into_iter().filter_map(|(k, v)| v.as_str().map(|s| (k, s.to_string()))).collect::<HashMap<String, String>>()
});
let query = m.remove("query").and_then(|v| v.as_object().cloned());
let body = m.remove("body");
// Parse the timeout config uniformly (inline)
let timeout_ms = if let Some(ms) = m.remove("timeout_ms").and_then(|v| v.as_u64()) {
Some(ms)
} else if let Some(Value::Object(mut to)) = m.remove("timeout") {
to.remove("timeout").and_then(|v| v.as_u64())
} else {
None
};
let insecure = m.remove("insecure").and_then(|v| v.as_bool()).unwrap_or(false);
let http1_only = m.remove("http1_only").and_then(|v| v.as_bool()).unwrap_or(false);
let ca_pem = m.remove("ca_pem").and_then(|v| v.as_str().map(|s| s.to_string()));
let opts = HttpOpts { timeout_ms, insecure, ca_pem, http1_only };
Ok((method, url, headers, query, body, opts))
}
_ => Err(anyhow::anyhow!("invalid http config")),
}
}
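The two accepted config shapes in `parse_http_config` (bare URL string vs. object with defaults) can be modeled with an enum in place of `serde_json::Value`. This is a simplified sketch; `HttpCfg` and `parse` are hypothetical names:

```rust
// Simplified model of the http config: either a bare URL string
// (implying GET) or an object carrying an optional method.
enum HttpCfg {
    Url(String),
    Full { method: Option<String>, url: String },
}

// Dispatch on the shape, defaulting the method to GET and
// uppercasing any method the caller supplied.
fn parse(cfg: HttpCfg) -> (String, String) {
    match cfg {
        HttpCfg::Url(url) => ("GET".to_string(), url),
        HttpCfg::Full { method, url } => {
            let method = method
                .map(|m| m.to_uppercase())
                .unwrap_or_else(|| "GET".to_string());
            (method, url)
        }
    }
}
```

The real parser does the same dispatch on `Value::String` vs `Value::Object`, then peels off headers, query, body, and the timeout options.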


@@ -0,0 +1,7 @@
pub mod http;
pub mod db;
pub mod variable;
pub mod script_rhai;
pub mod script_js;
pub mod script_python;
pub mod condition;


@@ -0,0 +1,161 @@
// std
use std::fs;
use std::time::Instant;
// third-party
use async_trait::async_trait;
use serde_json::Value;
use tracing::{debug, info};
// crate
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::task::Executor;
#[derive(Default)]
pub struct ScriptJsTask;
fn read_node_script_file(ctx: &Value, node_id: &str, lang_key: &str) -> Option<String> {
if let Some(nodes) = ctx.get("nodes").and_then(|v| v.as_object()) {
if let Some(m) = nodes.get(node_id).and_then(|v| v.get("scripts")).and_then(|v| v.as_object()) {
return m.get(lang_key).and_then(|v| v.as_str()).map(|s| s.to_string());
}
}
None
}
fn truncate_str(s: &str, max: usize) -> String {
    let s = s.replace(['\n', '\r'], " ");
    // Truncate by chars, not bytes: a byte slice `&s[..max]` can panic on a
    // multi-byte UTF-8 boundary
    if s.len() <= max { s } else { s.chars().take(max).collect() }
}
fn shallow_diff(before: &Value, after: &Value) -> (Vec<String>, Vec<String>, Vec<String>) {
use std::collections::BTreeSet;
let mut added = Vec::new();
let mut removed = Vec::new();
let mut modified = Vec::new();
let (Some(bm), Some(am)) = (before.as_object(), after.as_object()) else {
if before != after { modified.push("<root>".to_string()); }
return (added, removed, modified);
};
let bkeys: BTreeSet<_> = bm.keys().cloned().collect();
let akeys: BTreeSet<_> = am.keys().cloned().collect();
for k in akeys.difference(&bkeys) { added.push((*k).to_string()); }
for k in bkeys.difference(&akeys) { removed.push((*k).to_string()); }
for k in akeys.intersection(&bkeys) {
let key = (*k).to_string();
if bm.get(&key) != am.get(&key) { modified.push(key); }
}
(added, removed, modified)
}
fn exec_js_script(node_id: &NodeId, script: &str, ctx: &mut Value) -> anyhow::Result<()> {
use rquickjs::{Runtime, Context as JsContext, Ctx, FromJs, Value as JsValue};
if script.trim().is_empty() {
info!(target = "udmin.flow", node=%node_id.0, "script_js task: empty script, skip");
return Ok(());
}
let start = Instant::now();
let preview = truncate_str(script, 200);
debug!(target = "udmin.flow", node=%node_id.0, preview=%preview, "script_js task: will execute JavaScript script");
let before_ctx = ctx.clone();
// Run in QuickJS
let rt = Runtime::new()?;
let qctx = JsContext::full(&rt)?;
// Execute in JS and retrieve ctx
let res: anyhow::Result<Option<Value>> = qctx.with(|q: Ctx<'_>| {
// Inject the current ctx into the JS global object as a JSON string
let global = q.globals();
let ctx_json = serde_json::to_string(&before_ctx).unwrap_or_else(|_| "{}".to_string());
global.set("__CTX_JSON", ctx_json)?;
// Wrap the script so it returns the serialized ctx string (string concatenation avoids format! brace-escaping issues)
let mut wrapped = String::new();
wrapped.push_str("(function(){ try { var ctx = JSON.parse(globalThis.__CTX_JSON); ");
wrapped.push_str(script);
wrapped.push_str(" ; return (typeof ctx === 'undefined') ? undefined : JSON.stringify(ctx); } catch (e) { throw e; } })()");
// Execute and fetch the return value
let v: JsValue = q.eval(wrapped)?;
// If the result is undefined/null, treat ctx as unmodified
if v.is_null() || v.is_undefined() {
Ok(None)
} else {
// Convert the returned JS value to a Rust String, then parse it as serde_json::Value
let s: String = String::from_js(&q, v)?;
let new_ctx: Value = serde_json::from_str(&s)?;
Ok(Some(new_ctx))
}
});
let dur_ms = start.elapsed().as_millis();
match res {
Ok(Some(new_ctx)) => {
let (added, removed, modified) = shallow_diff(&before_ctx, &new_ctx);
*ctx = new_ctx;
info!(target = "udmin.flow", node=%node_id.0, ms=%dur_ms, added=%added.len(), removed=%removed.len(), modified=%modified.len(), "script_js task: executed and ctx updated");
if !(added.is_empty() && removed.is_empty() && modified.is_empty()) {
debug!(target = "udmin.flow", node=%node_id.0, ?added, ?removed, ?modified, "script_js task: ctx shallow diff");
}
}
Ok(None) => {
info!(target = "udmin.flow", node=%node_id.0, ms=%dur_ms, preview=%preview, "script_js task: script returned no ctx, ctx unchanged");
}
Err(e) => {
info!(target = "udmin.flow", node=%node_id.0, ms=%dur_ms, err=%e.to_string(), preview=%preview, "script_js task: execution failed, ctx unchanged");
}
}
Ok(())
}
fn exec_js_file(node_id: &NodeId, path: &str, ctx: &mut Value) -> anyhow::Result<()> {
let code = match fs::read_to_string(path) {
Ok(s) => s,
Err(e) => {
info!(target = "udmin.flow", node=%node_id.0, err=%e.to_string(), "script_js task: failed to read JS file");
return Ok(());
}
};
if code.trim().is_empty() {
info!(target = "udmin.flow", node=%node_id.0, "script_js task: empty JS file, skip");
return Ok(());
}
exec_js_script(node_id, &code, ctx)
}
#[async_trait]
impl Executor for ScriptJsTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
// 1) File script takes priority: nodes.<id>.scripts.js -> execute the file
if let Some(path) = read_node_script_file(ctx, &node_id.0, "js") {
let preview = truncate_str(&path, 120);
debug!(target = "udmin.flow", node=%node_id.0, file=%preview, "script_js task: will execute JS file");
return exec_js_file(node_id, &path, ctx);
}
// 2) Inline script (String or { script | expr }): read nodes.<id> first, then fall back to the root level
let cfg: Option<String> = ctx.get("nodes")
.and_then(|nodes| nodes.get(&node_id.0))
.and_then(|n| n.get("script").or_else(|| n.get("expr")))
.and_then(|v| match v {
Value::String(s) => Some(s.clone()),
Value::Object(m) => m.get("script").or_else(|| m.get("expr")).and_then(|x| x.as_str()).map(|s| s.to_string()),
_ => None,
})
.or_else(|| ctx.get("script").and_then(|v| v.as_str()).map(|s| s.to_string()))
.or_else(|| ctx.get("expr").and_then(|v| v.as_str()).map(|s| s.to_string()));
if let Some(script) = cfg {
return exec_js_script(node_id, &script, ctx);
}
info!(target = "udmin.flow", node=%node_id.0, "script_js task: no script found, skip");
Ok(())
}
}
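The `shallow_diff` key classification can be demonstrated over plain maps instead of `serde_json` objects (a sketch; the real function also handles non-object roots by reporting `<root>` as modified):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Classify top-level keys into added / removed / modified by comparing
// the key sets of the before and after snapshots.
fn shallow_diff(
    before: &BTreeMap<String, i64>,
    after: &BTreeMap<String, i64>,
) -> (Vec<String>, Vec<String>, Vec<String>) {
    let bkeys: BTreeSet<_> = before.keys().cloned().collect();
    let akeys: BTreeSet<_> = after.keys().cloned().collect();
    // Keys only in `after` were added; keys only in `before` were removed
    let added: Vec<String> = akeys.difference(&bkeys).cloned().collect();
    let removed: Vec<String> = bkeys.difference(&akeys).cloned().collect();
    // Shared keys whose values differ were modified
    let modified: Vec<String> = akeys
        .intersection(&bkeys)
        .filter(|k| before.get(*k) != after.get(*k))
        .cloned()
        .collect();
    (added, removed, modified)
}
```

Because the diff is shallow, a change buried inside a nested object surfaces only as a "modified" top-level key, which keeps the per-node log lines cheap.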


@@ -0,0 +1,54 @@
use std::time::Instant;
use async_trait::async_trait;
use serde_json::Value;
use tracing::{debug, info};
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::task::Executor;
#[derive(Default)]
pub struct ScriptPythonTask;
fn read_node_script_file(ctx: &Value, node_id: &str, lang_key: &str) -> Option<String> {
if let Some(nodes) = ctx.get("nodes").and_then(|v| v.as_object()) {
if let Some(m) = nodes.get(node_id).and_then(|v| v.get("scripts")).and_then(|v| v.as_object()) {
return m.get(lang_key).and_then(|v| v.as_str()).map(|s| s.to_string());
}
}
None
}
fn truncate_str(s: &str, max: usize) -> String {
    let s = s.replace(['\n', '\r'], " ");
    // Truncate by chars, not bytes: a byte slice `&s[..max]` can panic on a
    // multi-byte UTF-8 boundary
    if s.len() <= max { s } else { s.chars().take(max).collect() }
}
#[async_trait]
impl Executor for ScriptPythonTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
let start = Instant::now();
// Prefer the script file path at nodes.<id>.scripts.python
if let Some(path) = read_node_script_file(ctx, &node_id.0, "python") {
let preview = truncate_str(&path, 120);
info!(target = "udmin.flow", node=%node_id.0, file=%preview, "script_python task: Python file execution not implemented yet (skipped)");
return Ok(());
}
// Accept inline config for compatibility (not executed yet; log only)
let inline = ctx.get("script")
.or_else(|| ctx.get("expr"))
.and_then(|v| v.as_str())
.map(|s| s.to_string());
if let Some(code) = inline {
let preview = truncate_str(&code, 200);
debug!(target = "udmin.flow", node=%node_id.0, preview=%preview, "script_python task: inline script provided, but execution not implemented");
let _elapsed = start.elapsed().as_millis();
info!(target = "udmin.flow", node=%node_id.0, "script_python task: Python execution not implemented yet (skipped)");
return Ok(());
}
info!(target = "udmin.flow", node=%node_id.0, "script_python task: no script found, skip");
Ok(())
}
}


@@ -0,0 +1,139 @@
use std::fs;
use std::time::Instant;
use async_trait::async_trait;
use serde_json::Value;
use tracing::{debug, info};
use anyhow::anyhow;
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::engine::eval_rhai_expr_json;
use crate::flow::task::Executor;
#[derive(Default)]
pub struct ScriptRhaiTask;
/// Truncate a long string (newlines stripped) for log previews
fn truncate_str(s: &str, max: usize) -> String {
    let s = s.replace(['\n', '\r'], " ");
    // Truncate by chars, not bytes: a byte slice `&s[..max]` can panic on a
    // multi-byte UTF-8 boundary
    if s.len() <= max {
        s
    } else {
        s.chars().take(max).collect()
    }
}
/// Shallow-compare two JSON values; returns (added keys, removed keys, modified keys)
fn shallow_diff(before: &Value, after: &Value) -> (Vec<String>, Vec<String>, Vec<String>) {
use std::collections::BTreeSet;
let mut added = Vec::new();
let mut removed = Vec::new();
let mut modified = Vec::new();
let (Some(bm), Some(am)) = (before.as_object(), after.as_object()) else {
if before != after {
modified.push("<root>".to_string());
}
return (added, removed, modified);
};
let bkeys: BTreeSet<_> = bm.keys().cloned().collect();
let akeys: BTreeSet<_> = am.keys().cloned().collect();
for k in akeys.difference(&bkeys) {
added.push(k.to_string());
}
for k in bkeys.difference(&akeys) {
removed.push(k.to_string());
}
for k in akeys.intersection(&bkeys) {
if bm.get(k) != am.get(k) {
modified.push(k.to_string());
}
}
(added, removed, modified)
}
/// Core execution: run the Rhai script and update ctx in place
fn exec_rhai_code(node_id: &NodeId, script: &str, ctx: &mut Value, source: &str) -> anyhow::Result<()> {
if script.trim().is_empty() {
info!(target = "udmin.flow", node=%node_id.0, source, "script_rhai task: empty script, skip");
return Ok(());
}
let start = Instant::now();
let preview = truncate_str(script, 200);
debug!(target = "udmin.flow", node=%node_id.0, source, preview=%preview, "script_rhai task: will execute script");
let before_ctx = ctx.clone();
let wrapped = format!("{{ {} ; ctx }}", script);
match eval_rhai_expr_json(&wrapped, ctx) {
Ok(new_ctx) => {
let dur_ms = start.elapsed().as_millis();
let (added, removed, modified) = shallow_diff(&before_ctx, &new_ctx);
*ctx = new_ctx;
info!(target = "udmin.flow", node=%node_id.0, source, ms=%dur_ms, added=%added.len(), removed=%removed.len(), modified=%modified.len(), "script_rhai task: executed and ctx updated");
if !(added.is_empty() && removed.is_empty() && modified.is_empty()) {
debug!(target = "udmin.flow", node=%node_id.0, source, ?added, ?removed, ?modified, "script_rhai task: ctx shallow diff");
}
Ok(())
}
Err(err) => {
let dur_ms = start.elapsed().as_millis();
info!(target = "udmin.flow", node=%node_id.0, source, ms=%dur_ms, preview=%preview, err=%err.to_string(), "script_rhai task: execution failed, ctx unchanged");
Err(anyhow!("Rhai script execution failed: {}", err))
}
}
}
/// Read the script file path from the node config
fn read_node_script_file(ctx: &Value, node_id: &str) -> Option<String> {
ctx.get("nodes")
.and_then(|v| v.get(node_id))
.and_then(|n| n.get("scripts"))
.and_then(|v| v.get("rhai"))
.and_then(|v| v.as_str())
.map(|s| s.to_string())
}
/// Read the inline script from the node config
fn read_node_inline_script(ctx: &Value, node_id: &str) -> Option<String> {
ctx.get("nodes")
.and_then(|nodes| nodes.get(node_id))
.and_then(|n| n.get("script").or_else(|| n.get("expr")))
.and_then(|v| match v {
Value::String(s) => Some(s.clone()),
Value::Object(m) => m
.get("script")
.or_else(|| m.get("expr"))
.and_then(|x| x.as_str())
.map(|s| s.to_string()),
_ => None,
})
.or_else(|| ctx.get("script").and_then(|v| v.as_str()).map(|s| s.to_string()))
.or_else(|| ctx.get("expr").and_then(|v| v.as_str()).map(|s| s.to_string()))
}
#[async_trait]
impl Executor for ScriptRhaiTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
// 1) Prefer the file-based script: nodes.<id>.scripts.rhai
if let Some(path) = read_node_script_file(ctx, &node_id.0) {
let code = fs::read_to_string(&path).map_err(|e| {
info!(target = "udmin.flow", node=%node_id.0, err=%e.to_string(), path, "script_rhai task: failed to read Rhai file");
anyhow!("failed to read Rhai file: {}", e)
})?;
return exec_rhai_code(node_id, &code, ctx, "file");
}
// 2) Fall back to the inline script (string or {script|expr} object)
if let Some(script) = read_node_inline_script(ctx, &node_id.0) {
return exec_rhai_code(node_id, &script, ctx, "inline");
}
// 3) No script found → skip
info!(target = "udmin.flow", node=%node_id.0, "script_rhai task: no script found, skip");
Ok(())
}
}
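The `shallow_diff` helper above only compares top-level keys; nested changes surface as a single "modified" entry. A self-contained sketch of the same set logic over plain std maps (so the snippet stands alone without `serde_json`; the `shallow_diff_keys` name is illustrative):

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Shallow diff over top-level keys only: (added, removed, modified).
fn shallow_diff_keys(before: &BTreeMap<&str, i32>, after: &BTreeMap<&str, i32>) -> (Vec<String>, Vec<String>, Vec<String>) {
    let bkeys: BTreeSet<&str> = before.keys().copied().collect();
    let akeys: BTreeSet<&str> = after.keys().copied().collect();
    let added = akeys.difference(&bkeys).map(|k| k.to_string()).collect();
    let removed = bkeys.difference(&akeys).map(|k| k.to_string()).collect();
    // Shared keys whose values differ count as modified; nested structure is not inspected
    let modified = akeys
        .intersection(&bkeys)
        .filter(|k| before.get(**k) != after.get(**k))
        .map(|k| k.to_string())
        .collect();
    (added, removed, modified)
}

fn main() {
    let before = BTreeMap::from([("a", 1), ("b", 2), ("c", 3)]);
    let after = BTreeMap::from([("b", 20), ("c", 3), ("d", 4)]);
    let (added, removed, modified) = shallow_diff_keys(&before, &after);
    assert_eq!(added, vec!["d"]);
    assert_eq!(removed, vec!["a"]);
    assert_eq!(modified, vec!["b"]);
}
```

Using `BTreeSet` keeps the resulting key lists in deterministic sorted order, which matters when the diff is emitted into logs.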

View File

@ -0,0 +1,168 @@
// third-party
use async_trait::async_trait;
use serde_json::{Value, json};
use tracing::info;
// crate
use crate::flow::domain::{NodeDef, NodeId};
use crate::flow::engine::eval_rhai_expr_json;
use crate::flow::task::Executor;
#[derive(Default)]
pub struct VariableTask;
fn resolve_assign_value(ctx: &Value, v: &Value) -> Value {
use serde_json::Value as V;
// helper: get by object path
fn get_by_path<'a>(mut cur: &'a V, path: &[&str]) -> Option<&'a V> {
for seg in path {
match cur {
V::Object(map) => {
if let Some(next) = map.get(*seg) { cur = next; } else { return None; }
}
_ => return None,
}
}
Some(cur)
}
let t = v.get("type").and_then(|v| v.as_str()).unwrap_or("");
match t {
"constant" => {
// Two extended string forms are supported:
// 1) a constant starting with ctx[ or ctx. is evaluated as a Rhai expression
// 2) a constant of the form ${path.a.b} is equivalent to ref: { content: ["path","a","b"] }
if let Some(s) = v.get("content").and_then(|c| c.as_str()) {
let s_trim = s.trim();
// ${a.b.c} -> ref
if s_trim.starts_with("${") && s_trim.ends_with('}') {
let inner = s_trim[2..s_trim.len()-1].trim();
// Parse into path segments; simplified handling of a.b.c and a["b"]["c"]
let normalized = inner
.replace('[', ".")
.replace(']', "")
.replace('"', "")
.replace('\'', "");
let parts: Vec<String> = normalized
.trim_matches('.')
.split('.')
.filter(|seg| !seg.is_empty())
.map(|seg| seg.to_string())
.collect();
if !parts.is_empty() {
let ref_json = json!({"type":"ref","content": parts});
return resolve_assign_value(ctx, &ref_json);
}
}
// Evaluate as a Rhai expression only when prefixed with ctx[ or ctx.; otherwise keep the original string constant
if s_trim.starts_with("ctx[") || s_trim.starts_with("ctx.") {
return eval_rhai_expr_json(s_trim, ctx).unwrap_or_else(|_| V::Null);
}
return V::String(s.to_string());
}
v.get("content").cloned().unwrap_or(V::Null)
}
"ref" => {
// frontend IFlowValue ref: content is [nodeId, key1, key2, ...] or [topKey, ...]
let parts: Vec<String> = v
.get("content")
.and_then(|v| v.as_array())
.map(|arr| arr.iter().filter_map(|x| x.as_str().map(|s| s.to_string())).collect())
.unwrap_or_default();
if parts.is_empty() { return V::Null; }
// Prefer nodes.<nodeId>.* if node id is provided
// parts is guaranteed non-empty here (guarded above); treat parts[0] as a candidate node id
{
let node_id = &parts[0];
let rest: Vec<&str> = parts.iter().skip(1).map(|s| s.as_str()).collect();
// 1) direct: nodes.<nodeId>.<rest...>
let mut path_nodes: Vec<&str> = vec!["nodes", node_id.as_str()];
path_nodes.extend(rest.iter().copied());
if let Some(val) = get_by_path(ctx, &path_nodes) { return val.clone(); }
// 2) HTTP shortcut: nodes.<nodeId>.http_response.<rest...> (e.g., [node, "body"])
let mut path_http: Vec<&str> = vec!["nodes", node_id.as_str(), "http_response"];
path_http.extend(rest.iter().copied());
if let Some(val) = get_by_path(ctx, &path_http) { return val.clone(); }
}
// Fallback: interpret as top-level path: ctx[parts[0]][parts[1]]...
let path_top: Vec<&str> = parts.iter().map(|s| s.as_str()).collect();
if let Some(val) = get_by_path(ctx, &path_top) { return val.clone(); }
// Additional fallback: if looks like [nodeId, ...rest] but nodes.* missing, try top-level with rest only
if parts.len() >= 2 {
let rest_only: Vec<&str> = parts.iter().skip(1).map(|s| s.as_str()).collect();
if let Some(val) = get_by_path(ctx, &rest_only) { return val.clone(); }
}
V::Null
}
"expression" => {
let expr = v.get("content").and_then(|x| x.as_str()).unwrap_or("");
if expr.trim().is_empty() { return V::Null; }
eval_rhai_expr_json(expr, ctx).unwrap_or_else(|_| V::Null)
}
_ => {
// fallback: if content exists, treat as constant
v.get("content").cloned().unwrap_or(V::Null)
}
}
}
#[async_trait]
impl Executor for VariableTask {
async fn execute(&self, node_id: &NodeId, _node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()> {
// Read the variable config (node scope only)
let node_id_str = &node_id.0;
let cfg = match ctx.get("nodes") {
Some(nodes) => nodes.get(node_id_str).and_then(|n| n.get("variable")).cloned(),
_ => None,
};
let Some(cfg) = cfg else {
info!(target = "udmin.flow", node=%node_id.0, "variable task: no config found, skip");
return Ok(());
};
// Accept either { assign: [...] } or a bare array
let assigns: Vec<Value> = match &cfg {
Value::Array(arr) => arr.clone(),
Value::Object(m) => m.get("assign").and_then(|v| v.as_array()).cloned().unwrap_or_default(),
_ => vec![],
};
if assigns.is_empty() {
info!(target = "udmin.flow", node=%node_id.0, "variable task: empty assign list, skip");
// Remove the variable node's config so it does not appear in the final ctx
if let Value::Object(map) = ctx { if let Some(Value::Object(nodes)) = map.get_mut("nodes") { nodes.remove(node_id_str); } }
return Ok(());
}
let mut applied = 0usize;
for item in assigns {
let op = item.get("operator").and_then(|v| v.as_str()).unwrap_or("assign");
let left = item.get("left").and_then(|v| v.as_str()).unwrap_or("");
let right = item.get("right").unwrap_or(&Value::Null);
if left.is_empty() { continue; }
let val = resolve_assign_value(ctx, right);
if let Value::Object(map) = ctx {
let exists = map.contains_key(left);
let do_set = match op {
"declare" => !exists,
_ => true,
};
if do_set {
map.insert(left.to_string(), val);
applied += 1;
}
}
}
// After execution, remove the variable node so it does not appear in the final ctx
if let Value::Object(map) = ctx {
if let Some(Value::Object(nodes)) = map.get_mut("nodes") {
nodes.remove(node_id_str);
}
}
info!(target = "udmin.flow", node=%node_id.0, count=%applied, "variable task: assigned variables");
Ok(())
}
}
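The `${...}` handling in `resolve_assign_value` above normalizes bracket and quote syntax into dot-separated path segments before resolving a ref. A standalone sketch of just that normalization step (the `parse_ref_path` name is illustrative, not from the codebase):

```rust
/// Turn `${a.b.c}` or `${a["b"].c}` into ["a", "b", "c"]; anything else yields an empty vec.
fn parse_ref_path(s: &str) -> Vec<String> {
    let t = s.trim();
    if !(t.starts_with("${") && t.ends_with('}')) {
        return Vec::new();
    }
    let inner = t[2..t.len() - 1].trim();
    // Flatten bracket/quote syntax into plain dots, then split
    let normalized = inner
        .replace('[', ".")
        .replace(']', "")
        .replace('"', "")
        .replace('\'', "");
    normalized
        .trim_matches('.')
        .split('.')
        .filter(|seg| !seg.is_empty())
        .map(str::to_string)
        .collect()
}

fn main() {
    assert_eq!(parse_ref_path("${path.a.b}"), vec!["path", "a", "b"]);
    assert_eq!(parse_ref_path(r#"${nodes["n1"].body}"#), vec!["nodes", "n1", "body"]);
    assert!(parse_ref_path("plain string").is_empty());
}
```

This is intentionally permissive string rewriting, matching the "simplified handling" noted in the source; it does not attempt to parse escaped quotes or numeric indices.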

View File

@ -0,0 +1,219 @@
use async_trait::async_trait;
use chrono::{DateTime, FixedOffset};
use serde_json::Value;
use tokio::sync::mpsc::Sender;
use crate::flow::context::StreamEvent;
use crate::services::flow_run_log_service::{self, CreateRunLogInput};
use crate::db::Db;
/// Abstract interface for flow run log handlers
#[async_trait]
pub trait FlowLogHandler: Send + Sync {
/// Record flow start
async fn log_start(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, operator: Option<(i64, String)>) -> anyhow::Result<()>;
/// Record flow failure (error message only)
async fn log_error(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, error_msg: &str, operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()>;
/// Record flow failure (with partial output and accumulated logs)
async fn log_error_detail(&self, _flow_id: i64, _flow_code: Option<&str>, _input: &Value, _output: &Value, _logs: &[String], error_msg: &str, _operator: Option<(i64, String)>, _started_at: DateTime<FixedOffset>, _duration_ms: i64) -> anyhow::Result<()> {
// Default implementation: fall back to error-message-only logging
self.log_error(_flow_id, _flow_code, _input, error_msg, _operator, _started_at, _duration_ms).await
}
/// Record flow success
async fn log_success(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, output: &Value, logs: &[String], operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()>;
/// Push a node execution event (only the SSE implementation needs this)
async fn emit_node_event(&self, _node_id: &str, _event_type: &str, _data: &Value) -> anyhow::Result<()> {
// Default no-op; the database log handler does not need it
Ok(())
}
/// Push a completion event (only the SSE implementation needs this)
async fn emit_done(&self, _success: bool, _output: &Value, _logs: &[String]) -> anyhow::Result<()> {
// Default no-op; the database log handler does not need it
Ok(())
}
}
/// Database-backed log handler
pub struct DatabaseLogHandler {
db: Db,
}
impl DatabaseLogHandler {
pub fn new(db: Db) -> Self {
Self { db }
}
}
#[async_trait]
impl FlowLogHandler for DatabaseLogHandler {
async fn log_start(&self, _flow_id: i64, _flow_code: Option<&str>, _input: &Value, _operator: Option<(i64, String)>) -> anyhow::Result<()> {
// The database handler records nothing at start; it only writes at completion
Ok(())
}
async fn log_error(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, error_msg: &str, operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: None,
ok: false,
logs: Some(error_msg.to_string()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create error log: {}", e))?;
Ok(())
}
async fn log_error_detail(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, output: &Value, logs: &[String], error_msg: &str, operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
// Append error_msg at the tail (if the last entry differs) so the log carries a clear, non-duplicated error description
let mut all_logs = logs.to_vec();
if all_logs.last().map(|s| s != error_msg).unwrap_or(true) {
all_logs.push(error_msg.to_string());
}
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: Some(serde_json::to_string(output).unwrap_or_default()),
ok: false,
logs: Some(serde_json::to_string(&all_logs).unwrap_or_default()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create error log with details: {}", e))?;
Ok(())
}
async fn log_success(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, output: &Value, logs: &[String], operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: Some(serde_json::to_string(output).unwrap_or_default()),
ok: true,
logs: Some(serde_json::to_string(logs).unwrap_or_default()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create success log: {}", e))?;
Ok(())
}
}
/// SSE log handler
pub struct SseLogHandler {
db: Db,
event_tx: Sender<StreamEvent>,
}
impl SseLogHandler {
pub fn new(db: Db, event_tx: Sender<StreamEvent>) -> Self {
Self { db, event_tx }
}
}
#[async_trait]
impl FlowLogHandler for SseLogHandler {
async fn log_start(&self, _flow_id: i64, _flow_code: Option<&str>, _input: &Value, _operator: Option<(i64, String)>) -> anyhow::Result<()> {
// The SSE handler does not record a start event either
Ok(())
}
async fn log_error(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, error_msg: &str, operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
// Push the SSE error event first (done is not sent here; the caller sends it along with ctx/logs)
crate::middlewares::sse::emit_error(&self.event_tx, error_msg.to_string()).await;
// Then persist to the database (error message only)
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: None,
ok: false,
logs: Some(error_msg.to_string()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create error log: {}", e))?;
Ok(())
}
async fn log_error_detail(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, output: &Value, logs: &[String], error_msg: &str, operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
// Push the SSE error event first (done is not sent here; the caller sends it along with ctx/logs)
crate::middlewares::sse::emit_error(&self.event_tx, error_msg.to_string()).await;
// Then persist to the database (with partial output and accumulated logs), avoiding a duplicate error entry
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
let mut all_logs = logs.to_vec();
if all_logs.last().map(|s| s != error_msg).unwrap_or(true) {
all_logs.push(error_msg.to_string());
}
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: Some(serde_json::to_string(output).unwrap_or_default()),
ok: false,
logs: Some(serde_json::to_string(&all_logs).unwrap_or_default()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create error log with details: {}", e))?;
Ok(())
}
async fn log_success(&self, flow_id: i64, flow_code: Option<&str>, input: &Value, output: &Value, logs: &[String], operator: Option<(i64, String)>, started_at: DateTime<FixedOffset>, duration_ms: i64) -> anyhow::Result<()> {
// Push the SSE done event first
crate::middlewares::sse::emit_done(&self.event_tx, true, output.clone(), logs.to_vec()).await;
// Then persist to the database
let (user_id, username) = operator.map(|(u, n)| (Some(u), Some(n))).unwrap_or((None, None));
flow_run_log_service::create(&self.db, CreateRunLogInput {
flow_id,
flow_code: flow_code.map(|s| s.to_string()),
input: Some(serde_json::to_string(input).unwrap_or_default()),
output: Some(serde_json::to_string(output).unwrap_or_default()),
ok: true,
logs: Some(serde_json::to_string(logs).unwrap_or_default()),
user_id,
username,
started_at,
duration_ms,
}).await.map_err(|e| anyhow::anyhow!("Failed to create success log: {}", e))?;
Ok(())
}
async fn emit_node_event(&self, node_id: &str, event_type: &str, data: &Value) -> anyhow::Result<()> {
// Push the node event over SSE
let event = StreamEvent::Node {
node_id: node_id.to_string(),
logs: vec![event_type.to_string()],
ctx: data.clone(),
};
// The channel may already be closed; ignore send errors
let _ = self.event_tx.send(event).await;
Ok(())
}
async fn emit_done(&self, success: bool, output: &Value, logs: &[String]) -> anyhow::Result<()> {
crate::middlewares::sse::emit_done(&self.event_tx, success, output.clone(), logs.to_vec()).await;
Ok(())
}
}
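The handler trait above leans on default method bodies so that `DatabaseLogHandler` can ignore SSE-only events while `SseLogHandler` overrides them. A minimal, self-contained (non-async) illustration of the pattern, with illustrative names:

```rust
// Sketch of the default-method pattern: the trait ships no-op event hooks,
// so a database-backed implementation only overrides what it needs.
trait LogHandler {
    fn log_success(&self, msg: &str) -> String;
    fn emit_node_event(&self, _node: &str) -> String {
        // Default: nothing to push
        String::new()
    }
}

struct DbHandler;
impl LogHandler for DbHandler {
    fn log_success(&self, msg: &str) -> String {
        format!("db: {msg}")
    }
}

fn main() {
    let h = DbHandler;
    assert_eq!(h.log_success("ok"), "db: ok");
    assert_eq!(h.emit_node_event("n1"), ""); // inherited no-op
}
```

In the real code the same idea is combined with `#[async_trait]`, which allows default bodies on async trait methods.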

View File

@ -0,0 +1,95 @@
use serde_json::{json, Value};
mod http;
mod db;
mod variable;
mod script;
/// Trim whitespace and strip wrapping quotes/backticks if present
pub fn sanitize_wrapped(s: &str) -> String {
let mut t = s.trim();
if t.len() >= 2 {
let bytes = t.as_bytes();
let first = bytes[0] as char;
let last = bytes[t.len() - 1] as char;
if (first == '`' && last == '`') || (first == '"' && last == '"') || (first == '\'' && last == '\'') {
t = &t[1..t.len() - 1];
t = t.trim();
// Handle stray trailing backslash left by an attempted escape of the closing quote/backtick
if t.ends_with('\\') {
t = &t[..t.len() - 1];
}
}
}
t.to_string()
}
/// Build ctx supplement from design_json: fill node-scope configs for executors, e.g., nodes.<id>.<executor>
/// This module intentionally keeps executor-specific mapping away from the DSL parser.
pub fn ctx_from_design_json(design: &Value) -> Value {
// Accept both JSON object and stringified JSON
let parsed: Option<Value> = match design {
Value::String(s) => serde_json::from_str::<Value>(s).ok(),
_ => None,
};
let design = parsed.as_ref().unwrap_or(design);
let mut nodes_map = serde_json::Map::new();
if let Some(arr) = design.get("nodes").and_then(|v| v.as_array()) {
for n in arr {
let id = match n.get("id").and_then(|v| v.as_str()) {
Some(s) => s,
None => continue,
};
let node_type = n.get("type").and_then(|v| v.as_str()).unwrap_or("");
let mut node_cfg = serde_json::Map::new();
match node_type {
"http" => {
if let Some(v) = http::extract_http_cfg(n) {
node_cfg.insert("http".into(), v);
}
}
"db" => {
if let Some(v) = db::extract_db_cfg(n) {
node_cfg.insert("db".into(), v);
}
}
"variable" => {
if let Some(v) = variable::extract_variable_cfg(n) {
node_cfg.insert("variable".into(), v);
}
}
"expr" | "script" | "script_rhai" | "script_js" | "script_python" => {
if let Some(v) = script::extract_script_cfg(n) {
// Store directly under the script key; the executor accepts both script and expr
node_cfg.insert("script".into(), v);
}
// Multi-language script files: data.scripts.{rhai,js,python}
if let Some(v) = script::extract_scripts_cfg(n) {
node_cfg.insert("scripts".into(), v);
}
}
_ => {}
}
// Compatibility: also extract a script config from data for non-expr/script node types
if !node_cfg.contains_key("script") {
if let Some(v) = script::extract_script_cfg(n) {
node_cfg.insert("script".into(), v);
}
}
// Compatibility: also extract data.scripts.* for non-expr/script node types
if !node_cfg.contains_key("scripts") {
if let Some(v) = script::extract_scripts_cfg(n) {
node_cfg.insert("scripts".into(), v);
}
}
if !node_cfg.is_empty() { nodes_map.insert(id.to_string(), Value::Object(node_cfg)); }
}
}
json!({ "nodes": Value::Object(nodes_map) })
}
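`sanitize_wrapped` above can be exercised on its own; the following is a self-contained restatement of the function with a few sample inputs, to make its trim/unquote behavior concrete:

```rust
/// Trim whitespace and strip one layer of matching quotes/backticks, dropping a
/// stray trailing backslash left by an escaped closing delimiter.
fn sanitize_wrapped(s: &str) -> String {
    let mut t = s.trim();
    if t.len() >= 2 {
        let first = t.as_bytes()[0] as char;
        let last = t.as_bytes()[t.len() - 1] as char;
        if (first == '`' && last == '`') || (first == '"' && last == '"') || (first == '\'' && last == '\'') {
            t = t[1..t.len() - 1].trim();
            if t.ends_with('\\') {
                t = &t[..t.len() - 1];
            }
        }
    }
    t.to_string()
}

fn main() {
    assert_eq!(sanitize_wrapped("  `SELECT 1`  "), "SELECT 1");
    assert_eq!(sanitize_wrapped("\"https://example.com\""), "https://example.com");
    assert_eq!(sanitize_wrapped("plain"), "plain");
}
```

Only one wrapping layer is removed, and the first/last byte comparison is safe here because the delimiters being tested are all ASCII.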

View File

@ -0,0 +1,35 @@
use serde_json::Value;
// Extract db config: sql, params, outputKey, connection from a node
pub fn extract_db_cfg(n: &Value) -> Option<Value> {
let data = n.get("data");
let db_cfg = data.and_then(|d| d.get("db")).and_then(|v| v.as_object())?;
let mut db_obj = serde_json::Map::new();
// sql can be string or object with content
let raw_sql = db_cfg.get("sql");
let sql = match raw_sql {
Some(Value::String(s)) => super::sanitize_wrapped(s),
Some(Value::Object(o)) => o
.get("content")
.and_then(|v| v.as_str())
.map(super::sanitize_wrapped)
.unwrap_or_default(),
_ => String::new(),
};
if !sql.is_empty() {
db_obj.insert("sql".into(), Value::String(sql));
}
if let Some(p) = db_cfg.get("params") {
db_obj.insert("params".into(), p.clone());
}
if let Some(Value::String(k)) = db_cfg.get("outputKey") {
db_obj.insert("outputKey".into(), Value::String(k.clone()));
}
if let Some(conn) = db_cfg.get("connection") {
db_obj.insert("connection".into(), conn.clone());
}
if db_obj.is_empty() { None } else { Some(Value::Object(db_obj)) }
}

View File

@ -0,0 +1,88 @@
use serde_json::Value;
// Extract the HTTP config from a node: method, url, headers, query, body
pub fn extract_http_cfg(n: &Value) -> Option<Value> {
let data = n.get("data");
let api = data.and_then(|d| d.get("api"));
let method = api
.and_then(|a| a.get("method"))
.and_then(|v| v.as_str())
.unwrap_or("GET")
.to_string();
let url_val = api.and_then(|a| a.get("url"));
let raw_url = match url_val {
Some(Value::String(s)) => s.clone(),
Some(Value::Object(obj)) => obj
.get("content")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string(),
_ => String::new(),
};
let url = super::sanitize_wrapped(&raw_url);
if url.is_empty() {
return None;
}
let mut http_obj = serde_json::Map::new();
http_obj.insert("method".into(), Value::String(method));
http_obj.insert("url".into(), Value::String(url));
// Optional: headers
if let Some(hs) = api.and_then(|a| a.get("headers")).and_then(|v| v.as_object()) {
let mut heads = serde_json::Map::new();
for (k, v) in hs.iter() {
if let Some(s) = v.as_str() {
heads.insert(k.clone(), Value::String(s.to_string()));
}
}
if !heads.is_empty() {
http_obj.insert("headers".into(), Value::Object(heads));
}
}
// Optional: query
if let Some(qs) = api.and_then(|a| a.get("query")).and_then(|v| v.as_object()) {
let mut query = serde_json::Map::new();
for (k, v) in qs.iter() {
query.insert(k.clone(), v.clone());
}
if !query.is_empty() {
http_obj.insert("query".into(), Value::Object(query));
}
}
// Optional: body
if let Some(body_obj) = data.and_then(|d| d.get("body")).and_then(|v| v.as_object()) {
if let Some(Value::Object(json_body)) = body_obj.get("json") {
http_obj.insert("body".into(), Value::Object(json_body.clone()));
} else if let Some(Value::String(s)) = body_obj.get("content") {
http_obj.insert("body".into(), Value::String(s.clone()));
}
}
// Optional: timeout (normalized: either a bare number or an object)
if let Some(to_val) = data.and_then(|d| d.get("timeout")) {
match to_val {
Value::Number(n) => {
http_obj.insert("timeout_ms".into(), Value::Number(n.clone()));
}
Value::Object(obj) => {
// Read the object's fields and normalize them
let mut t = serde_json::Map::new();
if let Some(ms) = obj.get("timeout").and_then(|v| v.as_u64()) {
t.insert("timeout".into(), Value::Number(serde_json::Number::from(ms)));
}
if let Some(rt) = obj.get("retryTimes").and_then(|v| v.as_u64()) {
t.insert("retryTimes".into(), Value::Number(serde_json::Number::from(rt)));
}
if !t.is_empty() {
http_obj.insert("timeout".into(), Value::Object(t));
}
}
_ => {}
}
}
Some(Value::Object(http_obj))
}

View File

@ -0,0 +1,63 @@
use serde_json::Value;
// Extract single inline script config from a node's design_json: prefer data.script or data.expr
pub fn extract_script_cfg(n: &Value) -> Option<Value> {
let data = n.get("data");
// script may be a plain string, or object with { content: string }
if let Some(Value::String(s)) = data.and_then(|d| d.get("script")) {
let s = super::sanitize_wrapped(s);
if !s.is_empty() { return Some(Value::String(s)); }
}
if let Some(Value::Object(obj)) = data.and_then(|d| d.get("script")) {
if let Some(Value::String(s)) = obj.get("content") {
let s = super::sanitize_wrapped(s);
if !s.is_empty() { return Some(Value::String(s)); }
}
}
// fallback to expr
if let Some(Value::String(s)) = data.and_then(|d| d.get("expr")) {
let s = super::sanitize_wrapped(s);
if !s.is_empty() { return Some(Value::String(s)); }
}
if let Some(Value::Object(obj)) = data.and_then(|d| d.get("expr")) {
if let Some(Value::String(s)) = obj.get("content") {
let s = super::sanitize_wrapped(s);
if !s.is_empty() { return Some(Value::String(s)); }
}
}
None
}
// Extract multi-language scripts config: data.scripts.{rhai,js,python} each can be string (file path) or object { file | path }
pub fn extract_scripts_cfg(n: &Value) -> Option<Value> {
let data = n.get("data");
let scripts_obj = match data.and_then(|d| d.get("scripts")).and_then(|v| v.as_object()) {
Some(m) => m,
None => return None,
};
let mut out = serde_json::Map::new();
for (k, v) in scripts_obj {
let key = k.to_lowercase();
let lang = match key.as_str() {
"rhai" => Some("rhai"),
"js" | "javascript" => Some("js"),
"py" | "python" => Some("python"),
_ => None,
};
if let Some(lang_key) = lang {
let path_opt = match v {
Value::String(s) => Some(super::sanitize_wrapped(s)),
Value::Object(obj) => {
if let Some(Value::String(p)) = obj.get("file").or_else(|| obj.get("path")) {
Some(super::sanitize_wrapped(p))
} else { None }
}
_ => None,
};
if let Some(p) = path_opt { if !p.is_empty() { out.insert(lang_key.to_string(), Value::String(p)); } }
}
}
if out.is_empty() { None } else { Some(Value::Object(out)) }
}
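The key normalization in `extract_scripts_cfg` lowercases the script map keys and folds aliases onto canonical executor names. Isolated as a small function (the `canonical_lang` name is illustrative):

```rust
/// Map a scripts-map key onto its canonical executor language, if any.
fn canonical_lang(key: &str) -> Option<&'static str> {
    match key.to_lowercase().as_str() {
        "rhai" => Some("rhai"),
        "js" | "javascript" => Some("js"),
        "py" | "python" => Some("python"),
        _ => None, // unknown keys are silently dropped
    }
}

fn main() {
    assert_eq!(canonical_lang("JavaScript"), Some("js"));
    assert_eq!(canonical_lang("py"), Some("python"));
    assert_eq!(canonical_lang("lua"), None);
}
```

The canonical names line up with the `script_rhai` / `script_js` / `script_python` executor keys registered in `task.rs`.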

View File

@ -0,0 +1,19 @@
use serde_json::Value;
// Extract variable config: assign list from a node
pub fn extract_variable_cfg(n: &Value) -> Option<Value> {
let data = n.get("data").and_then(|d| d.as_object());
let assigns = data.and_then(|d| d.get("assign")).cloned();
match assigns {
Some(Value::Array(arr)) if !arr.is_empty() => Some(Value::Object(serde_json::Map::from_iter([
("assign".into(), Value::Array(arr))
]))),
Some(v @ Value::Array(_)) => Some(Value::Object(serde_json::Map::from_iter([
("assign".into(), v)
]))),
Some(v @ Value::Object(_)) => Some(Value::Object(serde_json::Map::from_iter([
("assign".into(), Value::Array(vec![v]))
]))),
_ => None,
}
}

8
backend/src/flow/mod.rs Normal file
View File

@ -0,0 +1,8 @@
pub mod domain;
pub mod context;
pub mod task;
pub mod engine;
pub mod dsl;
pub mod executors;
pub mod mappers;
pub mod log_handler;

54
backend/src/flow/task.rs Normal file
View File

@ -0,0 +1,54 @@
use async_trait::async_trait;
use serde_json::Value;
use std::sync::{Arc, RwLock, OnceLock};
use crate::flow::domain::{NodeDef, NodeId};
#[async_trait]
pub trait Executor: Send + Sync {
async fn execute(&self, node_id: &NodeId, node: &NodeDef, ctx: &mut Value) -> anyhow::Result<()>;
}
pub type TaskRegistry = std::collections::HashMap<String, Arc<dyn Executor>>;
pub fn default_registry() -> TaskRegistry {
let mut reg: TaskRegistry = TaskRegistry::new();
reg.insert("http".into(), Arc::new(crate::flow::executors::http::HttpTask::default()));
reg.insert("db".into(), Arc::new(crate::flow::executors::db::DbTask::default()));
// Script executors by language
reg.insert("script_rhai".into(), Arc::new(crate::flow::executors::script_rhai::ScriptRhaiTask::default()));
reg.insert("script".into(), Arc::new(crate::flow::executors::script_rhai::ScriptRhaiTask::default())); // default alias -> Rhai
reg.insert("script_js".into(), Arc::new(crate::flow::executors::script_js::ScriptJsTask::default()));
reg.insert("script_python".into(), Arc::new(crate::flow::executors::script_python::ScriptPythonTask::default()));
// Backward-compatible alias: expr -> script_rhai
reg.insert("expr".into(), Arc::new(crate::flow::executors::script_rhai::ScriptRhaiTask::default()));
// register variable executor
reg.insert("variable".into(), Arc::new(crate::flow::executors::variable::VariableTask::default()));
reg
}
// ===== Global registry (for DI/registry center) =====
static GLOBAL_TASK_REGISTRY: OnceLock<RwLock<TaskRegistry>> = OnceLock::new();
/// Get a snapshot of current registry (clone of HashMap). If not initialized, it will be filled with default_registry().
pub fn get_registry() -> TaskRegistry {
let lock = GLOBAL_TASK_REGISTRY.get_or_init(|| RwLock::new(default_registry()));
lock.read().expect("lock poisoned").clone()
}
/// Register/override a single task into global registry.
pub fn register_global_task(name: impl Into<String>, task: Arc<dyn Executor>) {
let lock = GLOBAL_TASK_REGISTRY.get_or_init(|| RwLock::new(default_registry()));
let mut w = lock.write().expect("lock poisoned");
w.insert(name.into(), task);
}
/// Initialize or mutate the global registry with a custom initializer.
pub fn init_global_registry_with(init: impl FnOnce(&mut TaskRegistry)) {
let lock = GLOBAL_TASK_REGISTRY.get_or_init(|| RwLock::new(default_registry()));
let mut w = lock.write().expect("lock poisoned");
init(&mut w);
}
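The global registry above combines `OnceLock` lazy initialization with an `RwLock`-guarded map, and `get_registry` hands out a cloned snapshot so in-flight executions are unaffected by later registrations. A self-contained sketch with string placeholders instead of `Arc<dyn Executor>`:

```rust
use std::collections::HashMap;
use std::sync::{OnceLock, RwLock};

static REGISTRY: OnceLock<RwLock<HashMap<String, &'static str>>> = OnceLock::new();

fn registry() -> &'static RwLock<HashMap<String, &'static str>> {
    // Lazily seed the registry with defaults on first access
    REGISTRY.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("http".to_string(), "HttpTask");
        RwLock::new(m)
    })
}

/// Register or override a task by name.
fn register(name: &str, task: &'static str) {
    registry().write().expect("lock poisoned").insert(name.to_string(), task);
}

/// Take a cloned snapshot, isolating callers from later registrations.
fn snapshot() -> HashMap<String, &'static str> {
    registry().read().expect("lock poisoned").clone()
}

fn main() {
    register("variable", "VariableTask");
    let snap = snapshot();
    assert_eq!(snap.get("http"), Some(&"HttpTask"));
    assert_eq!(snap.get("variable"), Some(&"VariableTask"));
}
```

Cloning the map on every read is cheap when values are `Arc`s, as in the real registry, since only the pointers are copied.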

View File

@ -7,7 +7,7 @@ pub mod models;
pub mod services;
pub mod routes;
pub mod utils;
-//pub mod workflow;
+pub mod flow;
use axum::Router;
use axum::http::{HeaderValue, Method};
@ -15,6 +15,16 @@ use tower_http::cors::{CorsLayer, Any, AllowOrigin};
use migration::MigratorTrait;
use axum::middleware;
+// Custom log timestamp format: YYYY-MM-DD HH:MM:SS.ssssss (no T or Z)
+struct LocalTimeFmt;
+impl tracing_subscriber::fmt::time::FormatTime for LocalTimeFmt {
+fn format_time(&self, w: &mut tracing_subscriber::fmt::format::Writer) -> std::fmt::Result {
+let now = chrono::Local::now();
+w.write_str(&now.format("%Y-%m-%d %H:%M:%S%.6f").to_string())
+}
+}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Enhancement: allow ENV_FILE to choose which environment file to load, and log the file actually loaded
@ -41,17 +51,33 @@
}
};
-tracing_subscriber::fmt().with_env_filter(tracing_subscriber::EnvFilter::from_default_env()).init();
+tracing_subscriber::fmt()
+.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
+.with_timer(LocalTimeFmt)
+.init();
let db = db::init_db().await?;
+// set global DB for tasks
+db::set_db(db.clone()).expect("db set failure");
// initialize Redis connection
let redis_pool = redis::init_redis().await?;
redis::set_redis_pool(redis_pool)?;
+// Initialize the distributed ID generator (reads ID_MACHINE_ID / ID_NODE_ID)
+crate::utils::init_from_env();
// run migrations
migration::Migrator::up(&db, None).await.expect("migration up");
+// Initialize and start the scheduler (start only; jobs are not loaded from the DB here)
+if let Err(e) = crate::utils::init_scheduler().await { tracing::error!(target = "udmin", error = %e, "init scheduler failed"); }
+// The service layer loads enabled jobs and registers them with the scheduler
+if let Err(e) = services::schedule_job_service::init_load_enabled_and_register(&db).await {
+tracing::error!(target = "udmin", error = %e, "init schedule jobs failed");
+}
let allow_origins = std::env::var("CORS_ALLOW_ORIGINS").unwrap_or_else(|_| "http://localhost:5173".into());
let origin_values: Vec<HeaderValue> = allow_origins
.split(',')
@ -95,7 +121,8 @@
let app = Router::new()
.nest("/api", api)
.layer(cors)
-.layer(middleware::from_fn_with_state(db.clone(), middlewares::logging::request_logger));
+.layer(middleware::from_fn_with_state(db.clone(), middlewares::logging::request_logger))
+.layer(middleware::from_fn_with_state(db.clone(), middlewares::auth_guard::auth_guard));
// Read and log the host and port actually used (default port changed to 9898)
let app_host = std::env::var("APP_HOST").unwrap_or("0.0.0.0".into());
@ -103,8 +130,18 @@
if let Some(f) = &env_file_used { tracing::info!("env file loaded: {}", f); } else { tracing::info!("env file loaded: <none>"); }
tracing::info!("resolved APP_HOST={} APP_PORT={}", app_host, app_port);
-let addr = format!("{}:{}", app_host, app_port);
-tracing::info!("listening on {}", addr);
-axum::serve(tokio::net::TcpListener::bind(addr).await?, app).await?;
+let http_addr = format!("{}:{}", app_host, app_port);
+tracing::info!("listening on {}", http_addr);
+// HTTP server listener
+let http_listener = tokio::net::TcpListener::bind(http_addr.clone()).await?;
+let http_server = axum::serve(http_listener, app);
+// WS server, moved down into the middleware layer
+let ws_server = middlewares::ws::serve(db.clone());
+// New: SSE server listening on its own port (default 8866, configurable via SSE_HOST/SSE_PORT)
+let sse_server = middlewares::sse::serve(db.clone());
+tokio::try_join!(http_server, ws_server, sse_server)?;
Ok(())
}

View File

@ -0,0 +1,91 @@
//! Global authentication guard middleware
//!
//! - Purpose: uniform login validation (Bearer access token) for requests entering /api
//! - Supports skipping auth by path: prefix whitelist and exact-path whitelist
//! - On failure returns 401 with the unified body { code: 401, message: "unauthorized" }
use axum::{extract::State, http::{Request, Method, header::AUTHORIZATION}, middleware::Next, response::Response};
use axum::body::Body;
use crate::{db::Db, error::AppError};
/// Path whitelist configuration
/// - prefix_whitelist: matched by prefix, e.g. /api/auth/
/// - exact_whitelist: matched by full path, e.g. /api/auth/login
#[derive(Clone, Debug)]
pub struct AuthGuardConfig {
pub prefix_whitelist: Vec<&'static str>,
pub exact_whitelist: Vec<&'static str>,
}
impl Default for AuthGuardConfig {
fn default() -> Self {
Self {
// Path prefixes such as login/refresh/public dynamic APIs allow anonymous access
prefix_whitelist: vec![
"/api/auth/",
"/api/dynamic/",
],
// Exact-path whitelist: e.g. health checks
exact_whitelist: vec![
"/api/auth/login",
"/api/auth/refresh",
],
}
}
}
/// Global authentication guard middleware
pub async fn auth_guard(
State(_db): State<Db>,
mut req: Request<Body>,
next: Next,
) -> Result<Response, AppError> {
let path = req.uri().path();
// Let CORS preflight requests pass through
if req.method() == Method::OPTIONS {
return Ok(next.run(req).await);
}
// Read the whitelist config (could later be extended via environment variables)
let cfg = AuthGuardConfig::default();
let is_whitelisted = cfg.exact_whitelist.iter().any(|&x| x == path)
|| cfg.prefix_whitelist.iter().any(|&p| path.starts_with(p));
if is_whitelisted {
return Ok(next.run(req).await);
}
// Only guard protected endpoints under /api; others such as /ws and /sse validate in their own modules
if !path.starts_with("/api/") {
return Ok(next.run(req).await);
}
// Parse Authorization: Bearer <token> from the request headers
let auth = req.headers().get(AUTHORIZATION).ok_or(AppError::Unauthorized)?;
let auth = auth.to_str().map_err(|_| AppError::Unauthorized)?;
let token = auth.strip_prefix("Bearer ").ok_or(AppError::Unauthorized)?;
let secret = std::env::var("JWT_SECRET").map_err(|_| AppError::Unauthorized)?;
// Decode and validate the access token
let claims = crate::middlewares::jwt::decode_token(token, &secret)?;
if claims.typ != "access" { return Err(AppError::Unauthorized); }
// Optional: Redis double-check (consistent with the AuthUser extractor)
let redis_validation_enabled = std::env::var("REDIS_TOKEN_VALIDATION")
.ok()
.and_then(|v| v.parse::<u8>().ok())
.map(|x| x == 1)
.unwrap_or(false);
if redis_validation_enabled {
let is_valid = crate::redis::TokenRedis::validate_access_token(token, claims.uid).await.unwrap_or(false);
if !is_valid { return Err(AppError::Unauthorized); }
}
// Inject the user claims into request extensions for downstream handlers (optional)
req.extensions_mut().insert(claims);
Ok(next.run(req).await)
}
// Note on converting AppError into responses: AppError's IntoResponse impl lives in error.rs.
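The whitelist check above combines an exact-path match with a prefix match. A minimal std-only sketch of that decision, extracted into a pure function (the paths in `main` are illustrative; `/api/dynamic/order_query` is a hypothetical route):

```rust
// Sketch of the guard's whitelist decision, mirroring AuthGuardConfig above.
fn is_whitelisted(path: &str, exact: &[&str], prefixes: &[&str]) -> bool {
    exact.iter().any(|&x| x == path) || prefixes.iter().any(|&p| path.starts_with(p))
}

fn main() {
    let exact = ["/api/auth/login", "/api/auth/refresh"];
    let prefixes = ["/api/auth/", "/api/dynamic/"];
    // matches both the exact list and the /api/auth/ prefix
    assert!(is_whitelisted("/api/auth/login", &exact, &prefixes));
    // matches by prefix only (hypothetical route)
    assert!(is_whitelisted("/api/dynamic/order_query", &exact, &prefixes));
    // protected paths fall through to token validation
    assert!(!is_whitelisted("/api/users", &exact, &prefixes));
    println!("whitelist checks passed");
}
```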


@@ -0,0 +1,205 @@
use std::collections::HashMap;
use std::time::Duration;
use anyhow::Result;
use reqwest::Certificate;
use serde_json::{Map, Value};
#[derive(Debug, Clone, Default)]
pub struct HttpClientOptions {
pub timeout_ms: Option<u64>,
pub insecure: bool,
pub ca_pem: Option<String>,
pub http1_only: bool,
}
#[derive(Debug, Clone, Default)]
pub struct HttpRequest {
pub method: String,
pub url: String,
pub headers: Option<HashMap<String, String>>, // header values are strings
pub query: Option<Map<String, Value>>, // query values will be stringified
pub body: Option<Value>, // json body
}
#[derive(Debug, Clone)]
pub struct HttpResponse {
pub status: u16,
pub headers: Map<String, Value>,
pub body: Value,
}
pub async fn execute_http(req: HttpRequest, opts: HttpClientOptions) -> Result<HttpResponse> {
// Build client with options
let mut builder = reqwest::Client::builder();
if let Some(ms) = opts.timeout_ms {
builder = builder.timeout(Duration::from_millis(ms));
}
if opts.insecure {
builder = builder.danger_accept_invalid_certs(true);
}
if opts.http1_only {
builder = builder.http1_only();
}
if let Some(pem) = opts.ca_pem {
if let Ok(cert) = Certificate::from_pem(pem.as_bytes()) {
builder = builder.add_root_certificate(cert);
}
}
let client = builder.build()?;
// Build request
// Clone the URL so it can be referenced in error messages
let mut rb = client.request(req.method.parse()?, req.url.clone());
// Also set per-request timeout to ensure it takes effect in all cases
if let Some(ms) = opts.timeout_ms {
rb = rb.timeout(Duration::from_millis(ms));
}
if let Some(hs) = req.headers {
use reqwest::header::{HeaderMap, HeaderName, HeaderValue};
let mut map = HeaderMap::new();
for (k, v) in hs {
if let (Ok(name), Ok(value)) = (HeaderName::from_bytes(k.as_bytes()), HeaderValue::from_str(&v)) {
map.insert(name, value);
}
}
rb = rb.headers(map);
}
if let Some(qs) = req.query {
let mut pairs: Vec<(String, String)> = Vec::new();
for (k, v) in qs {
let s = match v {
Value::String(s) => s,
Value::Number(n) => n.to_string(),
Value::Bool(b) => b.to_string(),
other => other.to_string(),
};
pairs.push((k, s));
}
rb = rb.query(&pairs);
}
if let Some(b) = req.body {
rb = rb.json(&b);
}
// Send the request, labeling timeout errors explicitly
let resp = match rb.send().await {
Ok(r) => r,
Err(e) => {
if e.is_timeout() {
let hint = match opts.timeout_ms {
Some(ms) => format!("http timeout: request {} timed out after {}ms", req.url, ms),
None => format!("http timeout: request {} timed out", req.url),
};
return Err(anyhow::Error::new(e).context(hint));
}
return Err(e.into());
}
};
let status = resp.status().as_u16();
let headers_out: Map<String, Value> = resp
.headers()
.iter()
.map(|(k, v)| (k.to_string(), Value::String(v.to_str().unwrap_or("").to_string())))
.collect();
// Read the response body, labeling timeout errors explicitly
let text = match resp.text().await {
Ok(t) => t,
Err(e) => {
if e.is_timeout() {
let hint = match opts.timeout_ms {
Some(ms) => format!("http timeout: reading body from {} timed out after {}ms", req.url, ms),
None => format!("http timeout: reading body from {} timed out", req.url),
};
return Err(anyhow::Error::new(e).context(hint));
}
return Err(e.into());
}
};
let parsed_body: Value = serde_json::from_str(&text).unwrap_or_else(|_| Value::String(text));
Ok(HttpResponse {
status,
headers: headers_out,
body: parsed_body,
})
}
#[cfg(test)]
mod tests {
use super::*;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};
#[tokio::test]
async fn test_get_success() {
let server = MockServer::start().await;
let body = serde_json::json!({"ok": true});
Mock::given(method("GET"))
.and(path("/hello"))
.respond_with(ResponseTemplate::new(200).set_body_json(body.clone()))
.mount(&server)
.await;
let req = HttpRequest {
method: "GET".into(),
url: format!("{}/hello", server.uri()),
..Default::default()
};
let opts = HttpClientOptions::default();
let resp = execute_http(req, opts).await.unwrap();
assert_eq!(resp.status, 200);
assert_eq!(resp.body, body);
}
#[tokio::test]
async fn test_post_json() {
let server = MockServer::start().await;
let input = serde_json::json!({"name": "udmin"});
Mock::given(method("POST")).and(path("/echo"))
.respond_with(|req: &wiremock::Request| {
// Echo back the request body as JSON
let body = serde_json::from_slice::<Value>(&req.body).unwrap_or(Value::Null);
ResponseTemplate::new(201).set_body_json(body)
})
.mount(&server)
.await;
let req = HttpRequest {
method: "POST".into(),
url: format!("{}/echo", server.uri()),
body: Some(input.clone()),
..Default::default()
};
let opts = HttpClientOptions::default();
let resp = execute_http(req, opts).await.unwrap();
assert_eq!(resp.status, 201);
assert_eq!(resp.body, input);
}
#[tokio::test]
async fn test_timeout() {
let server = MockServer::start().await;
// Delay longer than our timeout
Mock::given(method("GET"))
.and(path("/slow"))
.respond_with(ResponseTemplate::new(200).set_delay(Duration::from_millis(200)))
.mount(&server)
.await;
let req = HttpRequest { method: "GET".into(), url: format!("{}/slow", server.uri()), ..Default::default() };
let opts = HttpClientOptions { timeout_ms: Some(50), ..Default::default() };
let err = execute_http(req, opts).await.unwrap_err();
// Try to verify it's a timeout error from reqwest
let is_timeout = err
.downcast_ref::<reqwest::Error>()
.map(|e| e.is_timeout())
.unwrap_or(false);
assert!(is_timeout, "expected timeout error, got: {err}");
}
}
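The query-parameter handling in `execute_http` stringifies each JSON value, letting strings pass through without JSON quoting, so `?q=hello` is emitted rather than `?q="hello"`. A std-only sketch of that conversion, using a stand-in enum instead of `serde_json::Value`:

```rust
// Stand-in for the serde_json::Value variants the real code matches on,
// so this sketch compiles with std alone.
enum Val {
    Str(String),
    Num(i64),
    Bool(bool),
}

// Mirror of the query-value stringification in execute_http.
fn stringify(v: Val) -> String {
    match v {
        Val::Str(s) => s, // strings pass through without JSON quoting
        Val::Num(n) => n.to_string(),
        Val::Bool(b) => b.to_string(),
    }
}

fn main() {
    assert_eq!(stringify(Val::Str("hello".into())), "hello");
    assert_eq!(stringify(Val::Num(8866)), "8866");
    assert_eq!(stringify(Val::Bool(true)), "true");
    println!("stringify checks passed");
}
```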


@@ -20,8 +20,10 @@ pub fn encode_token(claims: &Claims, secret: &str) -> Result<String, AppError> {
pub fn decode_token(token: &str, secret: &str) -> Result<Claims, AppError> {
let key = DecodingKey::from_secret(secret.as_bytes());
match jsonwebtoken::decode::<Claims>(token, &key, &Validation::default()) {
Ok(data) => Ok(data.claims),
Err(_) => Err(AppError::Unauthorized),
}
}
#[derive(Clone, Debug)]


@@ -1,2 +1,6 @@
pub mod jwt;
pub mod logging;
pub mod sse;
pub mod http_client;
pub mod ws;
pub mod auth_guard;


@@ -0,0 +1,204 @@
use axum::response::sse::{Event, KeepAlive, Sse};
use futures::Stream;
use std::convert::Infallible;
use std::time::Duration;
use tokio_stream::{wrappers::ReceiverStream, StreamExt as _};
// Bring in the backend streaming event type
use crate::flow::context::StreamEvent;
// New: logging and timestamps
use tracing::info;
use chrono::Utc;
/// Wrap an mpsc::Receiver<T> into an SSE response, where T must implement Serialize
/// - Automatically serializes events to JSON text written into data: lines
/// - Attaches keep-alive to avoid long-connection timeouts
pub fn from_mpsc<T>(rx: tokio::sync::mpsc::Receiver<T>) -> Sse<impl Stream<Item = Result<Event, Infallible>>>
where
T: serde::Serialize + Send + 'static,
{
let stream = ReceiverStream::new(rx).map(|evt| {
let payload = serde_json::to_string(&evt).unwrap_or_else(|_| "{}".to_string());
// Key log line: record the moment each event is mapped to an SSE frame (i.e. just before it is written to the response stream)
info!(target: "udmin.sse", ts = %Utc::now().to_rfc3339(), payload_len = payload.len(), "sse send");
Ok::<Event, Infallible>(Event::default().data(payload))
});
Sse::new(stream).keep_alive(KeepAlive::new().interval(Duration::from_secs(10)).text("keep-alive"))
}
/// Unified send: node event
pub async fn emit_node(
tx: &tokio::sync::mpsc::Sender<StreamEvent>,
node_id: impl Into<String>,
logs: Vec<String>,
ctx: serde_json::Value,
) {
let nid = node_id.into();
// Log: event enqueue time
info!(target: "udmin.sse", kind = "node", node_id = %nid, logs_len = logs.len(), ts = %Utc::now().to_rfc3339(), "enqueue event");
let _ = tx
.send(StreamEvent::Node {
node_id: nid,
logs,
ctx,
})
.await;
}
/// Unified send: done event
pub async fn emit_done(
tx: &tokio::sync::mpsc::Sender<StreamEvent>,
ok: bool,
ctx: serde_json::Value,
logs: Vec<String>,
) {
info!(target: "udmin.sse", kind = "done", ok = ok, logs_len = logs.len(), ts = %Utc::now().to_rfc3339(), "enqueue event");
let _ = tx
.send(StreamEvent::Done { ok, ctx, logs })
.await;
}
/// Unified send: error event
pub async fn emit_error(
tx: &tokio::sync::mpsc::Sender<StreamEvent>,
message: impl Into<String>,
) {
let msg = message.into();
info!(target: "udmin.sse", kind = "error", message = %msg, ts = %Utc::now().to_rfc3339(), "enqueue event");
let _ = tx
.send(StreamEvent::Error {
message: msg,
})
.await;
}
// Additional required imports (avoiding duplicate imports of tracing::info and chrono::Utc)
use axum::{Router, middleware, routing::post, extract::{State, Path, Query}, Json};
use axum::http::HeaderMap;
use tokio::net::TcpListener;
use std::collections::HashMap;
use crate::db::Db;
use crate::error::AppError;
use crate::middlewares;
use crate::middlewares::jwt::decode_token;
use crate::services::flow_service;
use tower_http::cors::{CorsLayer, Any, AllowOrigin};
use axum::http::{HeaderValue, Method};
// Build a Router containing only the SSE route and the common logging middleware (modeled on ws::build_ws_app)
pub fn build_sse_app(db: Db) -> Router {
// Assemble CORS: same allowed origins, headers, and methods as the main service
let allow_origins = std::env::var("CORS_ALLOW_ORIGINS").unwrap_or_else(|_| "http://localhost:5173".into());
let origin_values: Vec<HeaderValue> = allow_origins
.split(',')
.filter_map(|s| HeaderValue::from_str(s.trim()).ok())
.collect();
let allowed_methods = [
Method::GET,
Method::POST,
Method::PUT,
Method::PATCH,
Method::DELETE,
Method::OPTIONS,
];
let cors = if origin_values.is_empty() {
CorsLayer::new()
.allow_origin(Any)
.allow_methods(allowed_methods.clone())
.allow_headers([
axum::http::header::ACCEPT,
axum::http::header::CONTENT_TYPE,
axum::http::header::AUTHORIZATION,
])
.allow_credentials(false)
} else {
CorsLayer::new()
.allow_origin(AllowOrigin::list(origin_values))
.allow_methods(allowed_methods)
.allow_headers([
axum::http::header::ACCEPT,
axum::http::header::CONTENT_TYPE,
axum::http::header::AUTHORIZATION,
])
.allow_credentials(true)
};
Router::new()
.nest(
"/api",
Router::new().route("/flows/{id}/run/stream", post(run_sse)),
)
.with_state(db.clone())
.layer(cors)
.layer(middleware::from_fn_with_state(db, middlewares::logging::request_logger))
}
// Start the SSE server, default port 8866; overridable via SSE_HOST/SSE_PORT (SSE_HOST falls back to APP_HOST)
pub async fn serve(db: Db) -> Result<(), std::io::Error> {
let host = std::env::var("SSE_HOST")
.ok()
.or_else(|| std::env::var("APP_HOST").ok())
.unwrap_or_else(|| "0.0.0.0".into());
let port = std::env::var("SSE_PORT").unwrap_or_else(|_| "8866".into());
let addr = format!("{}:{}", host, port);
tracing::info!("sse listening on {}", addr);
let listener = TcpListener::bind(addr).await?;
axum::serve(listener, build_sse_app(db)).await
}
// SSE route handler: same auth and invocation pattern as the WebSocket handler
#[derive(serde::Deserialize)]
struct RunReq { #[serde(default)] input: serde_json::Value }
async fn run_sse(
State(db): State<Db>,
Path(id): Path<i64>,
Query(q): Query<HashMap<String, String>>,
headers: HeaderMap,
Json(req): Json<RunReq>,
) -> Result<axum::response::sse::Sse<impl futures::Stream<Item = Result<axum::response::sse::Event, std::convert::Infallible>>>, AppError> {
use axum::http::header::AUTHORIZATION;
// 1) Auth: prefer the Authorization header, then the access_token query parameter
let token_opt = headers
.get(AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|s| s.strip_prefix("Bearer "))
.map(|s| s.to_string())
.or_else(|| q.get("access_token").cloned());
let token = token_opt.ok_or(AppError::Unauthorized)?;
let secret = std::env::var("JWT_SECRET").map_err(|_| AppError::Unauthorized)?;
let claims = decode_token(&token, &secret)?;
if claims.typ != "access" { return Err(AppError::Unauthorized); }
// Optional: Redis double-check (consistent with WS)
let redis_validation_enabled = std::env::var("REDIS_TOKEN_VALIDATION")
.unwrap_or_else(|_| "true".to_string())
.parse::<bool>().unwrap_or(true);
if redis_validation_enabled {
let is_valid = crate::redis::TokenRedis::validate_access_token(&token, claims.uid).await.unwrap_or(false);
if !is_valid { return Err(AppError::Unauthorized); }
}
// Create an mpsc channel to receive streaming events from the engine
let (tx, rx) = tokio::sync::mpsc::channel::<crate::flow::context::StreamEvent>(16);
// Spawn a background task to run the flow, sending events through tx
let db_clone = db.clone();
let id_clone = id;
let input = req.input.clone();
let user_info = Some((claims.uid, claims.sub));
tokio::spawn(async move {
let _ = flow_service::run_with_stream(db_clone, id_clone, flow_service::RunReq { input }, user_info, tx).await;
});
// Wrap the Receiver into an SSE response via the shared helper
Ok(from_mpsc(rx))
}
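`serve` resolves its bind host through a two-level fallback: `SSE_HOST`, then `APP_HOST`, then a hard default. A std-only sketch of that resolution (the `*_DEMO` variable names are invented here so the example does not touch real configuration):

```rust
use std::env;

// Sketch of serve()'s host resolution: primary env var, then fallback
// env var, then a default — mirroring SSE_HOST -> APP_HOST -> "0.0.0.0".
fn resolve_host(primary: &str, fallback: &str, default: &str) -> String {
    env::var(primary)
        .ok()
        .or_else(|| env::var(fallback).ok())
        .unwrap_or_else(|| default.to_string())
}

fn main() {
    env::remove_var("SSE_HOST_DEMO");
    env::set_var("APP_HOST_DEMO", "127.0.0.1");
    // primary unset: fall back to the secondary variable
    assert_eq!(resolve_host("SSE_HOST_DEMO", "APP_HOST_DEMO", "0.0.0.0"), "127.0.0.1");
    env::remove_var("APP_HOST_DEMO");
    // both unset: fall back to the default
    assert_eq!(resolve_host("SSE_HOST_DEMO", "APP_HOST_DEMO", "0.0.0.0"), "0.0.0.0");
    println!("host resolution checks passed");
}
```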


@@ -0,0 +1,141 @@
use axum::{Router, middleware};
use crate::db::Db;
use crate::routes;
use crate::middlewares;
use tokio::net::TcpListener;
use tracing::info;
// New: error type plus auth/Redis validation and flow execution
use crate::error::AppError;
use crate::middlewares::jwt::decode_token;
use crate::services::flow_service;
use crate::flow::context::StreamEvent;
// Encapsulate WS app construction: returns a Router containing only the WS route and the common logging middleware
pub fn build_ws_app(db: Db) -> Router {
Router::new()
.nest("/api", routes::flows::ws_router())
.with_state(db.clone())
.layer(middleware::from_fn_with_state(db, middlewares::logging::request_logger))
}
// Start the WS server: read WS_HOST/WS_PORT (falling back to APP_HOST/default port) and begin listening
pub async fn serve(db: Db) -> Result<(), std::io::Error> {
let host = std::env::var("WS_HOST")
.ok()
.or_else(|| std::env::var("APP_HOST").ok())
.unwrap_or_else(|| "0.0.0.0".into());
let port = std::env::var("WS_PORT").unwrap_or_else(|_| "8855".into());
let addr = format!("{}:{}", host, port);
info!("ws listening on {}", addr);
let listener = TcpListener::bind(addr).await?;
axum::serve(listener, build_ws_app(db)).await
}
// ================= Route handlers: invoked only via forwarding from the route layer =================
use std::collections::HashMap;
use axum::http::{HeaderMap, header::AUTHORIZATION};
use axum::response::Response;
use axum::extract::{State, Path, Query};
use axum::extract::ws::{WebSocketUpgrade, WebSocket, Message, Utf8Bytes};
pub async fn run_ws(
State(db): State<Db>,
Path(id): Path<i64>,
Query(q): Query<HashMap<String, String>>,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> Result<Response, AppError> {
// 1) Auth: prefer the Authorization header, then the access_token query parameter
let token_opt = headers
.get(AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|s| s.strip_prefix("Bearer "))
.map(|s| s.to_string())
.or_else(|| q.get("access_token").cloned());
let token = token_opt.ok_or(AppError::Unauthorized)?;
let secret = std::env::var("JWT_SECRET").map_err(|_| AppError::Unauthorized)?;
let claims = decode_token(&token, &secret)?;
if claims.typ != "access" { return Err(AppError::Unauthorized); }
// Optional: Redis double-check (consistent with the AuthUser extraction logic)
let redis_validation_enabled = std::env::var("REDIS_TOKEN_VALIDATION")
.unwrap_or_else(|_| "true".to_string())
.parse::<bool>().unwrap_or(true);
if redis_validation_enabled {
let is_valid = crate::redis::TokenRedis::validate_access_token(&token, claims.uid).await.unwrap_or(false);
if !is_valid { return Err(AppError::Unauthorized); }
}
let db_clone = db.clone();
let id_clone = id;
let user_info = Some((claims.uid, claims.sub));
Ok(ws.on_upgrade(move |socket| async move {
handle_ws_flow(socket, db_clone, id_clone, user_info).await;
}))
}
pub async fn handle_ws_flow(mut socket: WebSocket, db: Db, id: i64, user_info: Option<(i64, String)>) {
use tokio::time::{timeout, Duration};
use tokio::select;
use tokio::sync::mpsc;
// Read the first message as the input (5s timeout), format: { "input": any }
let mut input_value = serde_json::Value::Object(serde_json::Map::new());
match timeout(Duration::from_secs(5), socket.recv()).await {
Ok(Some(Ok(Message::Text(s)))) => {
if let Ok(v) = serde_json::from_str::<serde_json::Value>(s.as_str()) {
if let Some(inp) = v.get("input") { input_value = inp.clone(); }
}
}
Ok(Some(Ok(Message::Binary(b)))) => {
if let Ok(s) = std::str::from_utf8(&b) {
if let Ok(v) = serde_json::from_str::<serde_json::Value>(s) {
if let Some(inp) = v.get("input") { input_value = inp.clone(); }
}
}
}
_ => {}
}
// mpsc pipe: engine events -> this task
let (tx, mut rx) = mpsc::channel::<StreamEvent>(16);
// Run the flow in the background
let db2 = db.clone();
let id2 = id;
tokio::spawn(async move {
let _ = flow_service::run_with_stream(db2, id2, flow_service::RunReq { input: input_value }, user_info, tx).await;
});
// Forward events to the WebSocket
loop {
select! {
maybe_evt = rx.recv() => {
match maybe_evt {
None => { let _ = socket.send(Message::Close(None)).await; break; }
Some(evt) => {
let json = match evt {
StreamEvent::Node { node_id, logs, ctx } => serde_json::json!({"type":"node","node_id": node_id, "logs": logs, "ctx": ctx}),
StreamEvent::Done { ok, ctx, logs } => serde_json::json!({"type":"done","ok": ok, "ctx": ctx, "logs": logs}),
StreamEvent::Error { message } => serde_json::json!({"type":"error","message": message}),
};
let _ = socket.send(Message::Text(Utf8Bytes::from(json.to_string()))).await;
}
}
}
// Client closed the connection or sent another command (e.g. cancel)
maybe_msg = socket.recv() => {
match maybe_msg {
None => break,
Some(Ok(Message::Close(_))) => break,
Some(Ok(_other)) => { /* cancel and other commands are not handled yet */ }
Some(Err(_)) => break,
}
}
}
}
}


@@ -0,0 +1,21 @@
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "flows")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i64,
pub name: Option<String>,
pub yaml: Option<String>,
pub design_json: Option<String>,
// New: flow code and remark
pub code: Option<String>,
pub remark: Option<String>,
pub created_at: DateTimeWithTimeZone,
pub updated_at: DateTimeWithTimeZone,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -0,0 +1,25 @@
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel, serde::Serialize, serde::Deserialize)]
#[sea_orm(table_name = "flow_run_logs")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i64,
pub flow_id: i64,
// New: flow code (nullable)
pub flow_code: Option<String>,
pub input: Option<String>,
pub output: Option<String>,
pub ok: bool,
pub logs: Option<String>,
pub user_id: Option<i64>,
pub username: Option<String>,
pub started_at: DateTimeWithTimeZone,
pub duration_ms: i64,
pub created_at: DateTimeWithTimeZone,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -10,3 +10,6 @@ pub mod request_log;
// New: position and user-position association models
pub mod position;
pub mod user_position;
pub mod flow;
pub mod flow_run_log;
pub mod schedule_job;


@@ -0,0 +1,19 @@
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel, serde::Serialize, serde::Deserialize)]
#[sea_orm(table_name = "schedule_jobs")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: String,
pub name: String,
pub cron_expr: String,
pub enabled: bool,
pub flow_code: String,
pub created_at: DateTimeWithTimeZone,
pub updated_at: DateTimeWithTimeZone,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -61,6 +61,7 @@ impl RedisHelper {
}
/// Delete a key
#[allow(dead_code)]
pub async fn del(key: &str) -> Result<(), AppError> {
let mut conn = get_redis()?.clone();
let _: i32 = conn.del(key).await
@@ -69,6 +70,7 @@ impl RedisHelper {
}
/// Check whether a key exists
#[allow(dead_code)]
pub async fn exists(key: &str) -> Result<bool, AppError> {
let mut conn = get_redis()?.clone();
let result: bool = conn.exists(key).await
@@ -77,6 +79,7 @@ impl RedisHelper {
}
/// Set a key's expiration time
#[allow(dead_code)]
pub async fn expire(key: &str, seconds: u64) -> Result<(), AppError> {
let mut conn = get_redis()?.clone();
let _: bool = conn.expire(key, seconds as i64).await
@@ -136,12 +139,14 @@ impl TokenRedis {
}
/// Revoke a user's access token
#[allow(dead_code)]
pub async fn revoke_access_token(user_id: i64) -> Result<(), AppError> {
let key = format!("token:access:user:{}", user_id);
RedisHelper::del(&key).await
}
/// Revoke a user's refresh token
#[allow(dead_code)]
pub async fn revoke_refresh_token(user_id: i64) -> Result<(), AppError> {
let key = format!("token:refresh:user:{}", user_id);
RedisHelper::del(&key).await


@@ -0,0 +1,60 @@
use axum::{Router, routing::post, extract::{State, Path}, Json};
use crate::{db::Db, response::ApiResponse, services::flow_service, error::AppError};
use serde_json::Value;
use tracing::{info, error};
pub fn router() -> Router<Db> {
Router::new()
.route("/dynamic/{flow_code}", post(execute_flow))
}
async fn execute_flow(
State(db): State<Db>,
Path(flow_code): Path<String>,
Json(payload): Json<Value>
) -> Result<Json<ApiResponse<Value>>, AppError> {
info!(target = "udmin", "dynamic_api.execute_flow: start flow_code={}", flow_code);
// 1. Look up the flow by its code
let flow_doc = match flow_service::get_by_code(&db, &flow_code).await {
Ok(doc) => doc,
Err(e) => {
error!(target = "udmin", error = ?e, "dynamic_api.execute_flow: flow not found flow_code={}", flow_code);
return Err(flow_service::ae(e));
}
};
// 2. Execute the flow
let flow_id = flow_doc.id.clone();
info!(target = "udmin", "dynamic_api.execute_flow: found flow id={} for code={}", flow_id, flow_code);
match flow_service::run(&db, flow_id, flow_service::RunReq { input: payload }, Some((0, "接口".to_string()))).await {
Ok(result) => {
info!(target = "udmin", "dynamic_api.execute_flow: execution successful flow_code={}", flow_code);
// Only return http_resp / http_response from the context; if neither exists, return an empty object {}
let ctx = result.ctx;
let data = match ctx {
Value::Object(mut map) => {
if let Some(v) = map.remove("http_resp") {
v
} else if let Some(v) = map.remove("http_response") {
v
} else {
Value::Object(serde_json::Map::new())
}
}
_ => Value::Object(serde_json::Map::new()),
};
Ok(Json(ApiResponse::ok(data)))
},
Err(e) => {
error!(target = "udmin", error = ?e, "dynamic_api.execute_flow: execution failed flow_code={}", flow_code);
let mut full = e.to_string();
for cause in e.chain().skip(1) {
full.push_str(" | ");
full.push_str(&cause.to_string());
}
Err(AppError::InternalMsg(full))
}
}
}
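The context unwrapping above prefers `http_resp`, falls back to `http_response`, and otherwise returns an empty object. A std-only sketch of that lookup order, using a `HashMap<String, String>` in place of `serde_json::Map`:

```rust
use std::collections::HashMap;

// Sketch of the ctx lookup order in execute_flow: prefer "http_resp",
// fall back to "http_response", otherwise an empty-object placeholder.
fn pick_response(mut ctx: HashMap<String, String>) -> String {
    ctx.remove("http_resp")
        .or_else(|| ctx.remove("http_response"))
        .unwrap_or_else(|| "{}".to_string())
}

fn main() {
    let mut ctx = HashMap::new();
    ctx.insert("http_response".to_string(), "fallback".to_string());
    assert_eq!(pick_response(ctx.clone()), "fallback");
    // when both keys exist, http_resp wins
    ctx.insert("http_resp".to_string(), "primary".to_string());
    assert_eq!(pick_response(ctx), "primary");
    // neither key present: empty object
    assert_eq!(pick_response(HashMap::new()), "{}");
    println!("ctx lookup checks passed");
}
```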


@@ -0,0 +1,47 @@
//! Flow run log routes
//!
//! Paginated query and batch-delete endpoints for flow run logs.
use axum::{
extract::{Query, State},
routing::{get, delete},
Json, Router,
};
use axum::extract::Path;
use crate::db::Db;
use crate::error::AppError;
use crate::response::ApiResponse;
use crate::services::flow_run_log_service;
/// Route definitions: flow run log endpoints
pub fn router() -> Router<Db> {
Router::new()
.route("/flow_run_logs", get(list))
.route("/flow_run_logs/{ids}", delete(delete_many))
}
/// Flow run log list
async fn list(
State(db): State<Db>,
Query(p): Query<flow_run_log_service::ListParams>,
) -> Result<Json<ApiResponse<flow_run_log_service::PageResp<flow_run_log_service::RunLogItem>>>, AppError> {
let res = flow_run_log_service::list(&db, p).await?;
Ok(Json(ApiResponse::ok(res)))
}
/// Batch-delete flow run logs
///
/// The ids path parameter is a comma-separated ID list, e.g. /flow_run_logs/1001,1002
/// Returns: { deleted: <u64> }, where deleted is the number of rows actually removed
async fn delete_many(
State(db): State<Db>,
Path(ids): Path<String>,
) -> Result<Json<ApiResponse<serde_json::Value>>, AppError> {
let id_list: Vec<i64> = ids
.split(',')
.filter_map(|s| s.trim().parse::<i64>().ok())
.collect();
let deleted = flow_run_log_service::delete_many(&db, id_list).await?;
Ok(Json(ApiResponse::ok(serde_json::json!({ "deleted": deleted }))))
}
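`delete_many` trims whitespace and silently drops non-numeric segments when parsing the `ids` path parameter. A std-only sketch of that parsing:

```rust
// Sketch of the ids path-parameter parsing in delete_many: whitespace is
// trimmed and malformed segments are dropped rather than rejected.
fn parse_ids(ids: &str) -> Vec<i64> {
    ids.split(',')
        .filter_map(|s| s.trim().parse::<i64>().ok())
        .collect()
}

fn main() {
    assert_eq!(parse_ids("1001, 1002"), vec![1001, 1002]);
    // a malformed segment is skipped, not an error
    assert_eq!(parse_ids("1,x,3"), vec![1, 3]);
    assert_eq!(parse_ids(""), Vec::<i64>::new());
    println!("id parsing checks passed");
}
```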

backend/src/routes/flows.rs

@@ -0,0 +1,145 @@
// axum
use axum::extract::{Path, Query, State, ws::WebSocketUpgrade};
use axum::http::HeaderMap;
use axum::response::Response;
use axum::routing::{get, post};
use axum::{Json, Router};
// std
use std::collections::HashMap;
// third-party
use serde::Deserialize;
use tracing::{error, info};
// crate
use crate::middlewares::jwt::AuthUser;
// New: bring in the shared SSE helper
use crate::middlewares::sse;
use crate::{
db::Db,
error::AppError,
response::ApiResponse,
services::{flow_service, log_service},
};
pub fn router() -> Router<Db> {
Router::new()
.route("/flows", post(create).get(list))
.route("/flows/{id}", get(get_one).put(update).delete(remove))
.route("/flows/{id}/run", post(run))
// New: streaming-run (SSE) endpoint
.route("/flows/{id}/run/stream", post(run_stream))
// New: WebSocket live-output endpoint (GET handshake)
.route("/flows/{id}/run/ws", get(run_ws))
}
// New: a slim router containing only the WS route, for mounting on a separate port
pub fn ws_router() -> Router<Db> {
Router::new()
.route("/flows/{id}/run/ws", get(run_ws))
}
#[derive(Deserialize)]
struct PageParams { page: Option<u64>, page_size: Option<u64>, keyword: Option<String> }
async fn list(State(db): State<Db>, Query(p): Query<PageParams>) -> Result<Json<ApiResponse<flow_service::PageResp<flow_service::FlowSummary>>>, AppError> {
let page = p.page.unwrap_or(1);
let page_size = p.page_size.unwrap_or(10);
let res = flow_service::list(&db, page, page_size, p.keyword).await.map_err(flow_service::ae)?;
Ok(Json(ApiResponse::ok(res)))
}
#[derive(Deserialize)]
struct CreateReq { yaml: Option<String>, name: Option<String>, design_json: Option<serde_json::Value>, code: Option<String>, remark: Option<String> }
#[derive(Deserialize)]
struct UpdateReq { yaml: Option<String>, design_json: Option<serde_json::Value>, name: Option<String>, code: Option<String>, remark: Option<String> }
async fn create(State(db): State<Db>, user: AuthUser, Json(req): Json<CreateReq>) -> Result<Json<ApiResponse<flow_service::FlowDoc>>, AppError> {
info!(target = "udmin", "routes.flows.create: start");
let res = match flow_service::create(&db, flow_service::FlowCreateReq { yaml: req.yaml, name: req.name, design_json: req.design_json, code: req.code, remark: req.remark }).await {
Ok(r) => { info!(target = "udmin", id = %r.id, "routes.flows.create: ok"); r }
Err(e) => {
error!(target = "udmin", error = ?e, "routes.flows.create: failed");
// Map the error through the unified conversion to avoid exposing internal details
return Err(flow_service::ae(e));
}
};
// After a successful create, write one PUT request-log entry so "last modified by" defaults to the creator
if let Err(e) = log_service::create(&db, log_service::CreateLogInput {
path: format!("/api/flows/{}", res.id),
method: "PUT".to_string(),
request_params: None,
response_params: None,
status_code: 200,
user_id: Some(user.uid),
username: Some(user.username.clone()),
request_time: chrono::Utc::now().with_timezone(&chrono::FixedOffset::east_opt(0).unwrap()),
duration_ms: 0,
}).await {
error!(target = "udmin", error = ?e, "routes.flows.create: write put log failed");
}
Ok(Json(ApiResponse::ok(res)))
}
async fn update(State(db): State<Db>, Path(id): Path<i64>, Json(req): Json<UpdateReq>) -> Result<Json<ApiResponse<flow_service::FlowDoc>>, AppError> {
let res = flow_service::update(&db, id, flow_service::FlowUpdateReq { yaml: req.yaml, design_json: req.design_json, name: req.name, code: req.code, remark: req.remark }).await.map_err(flow_service::ae)?;
Ok(Json(ApiResponse::ok(res)))
}
async fn get_one(State(db): State<Db>, Path(id): Path<i64>) -> Result<Json<ApiResponse<flow_service::FlowDoc>>, AppError> {
let res = flow_service::get(&db, id).await.map_err(flow_service::ae)?;
Ok(Json(ApiResponse::ok(res)))
}
async fn remove(State(db): State<Db>, Path(id): Path<i64>) -> Result<Json<ApiResponse<serde_json::Value>>, AppError> {
flow_service::delete(&db, id).await.map_err(flow_service::ae)?;
Ok(Json(ApiResponse::ok(serde_json::json!({"deleted": true}))))
}
#[derive(Deserialize)]
struct RunReq { #[serde(default)] input: serde_json::Value }
async fn run(State(db): State<Db>, user: AuthUser, Path(id): Path<i64>, Json(req): Json<RunReq>) -> Result<Json<ApiResponse<flow_service::RunResult>>, AppError> {
match flow_service::run(&db, id, flow_service::RunReq { input: req.input }, Some((user.uid, user.username))).await {
Ok(r) => Ok(Json(ApiResponse::ok(r))),
Err(e) => {
// Synchronous execution: return the full backend error details to the frontend
let mut full = e.to_string();
for cause in e.chain().skip(1) { full.push_str(" | "); full.push_str(&cause.to_string()); }
Err(AppError::InternalMsg(full))
}
}
}
// New: SSE streaming-run endpoint; the request body reuses RunReq (input only)
async fn run_stream(State(db): State<Db>, user: AuthUser, Path(id): Path<i64>, Json(req): Json<RunReq>) -> Result<axum::response::sse::Sse<impl futures::Stream<Item = Result<axum::response::sse::Event, std::convert::Infallible>>>, AppError> {
// Create an mpsc channel to receive streaming events from the engine
let (tx, rx) = tokio::sync::mpsc::channel::<crate::flow::context::StreamEvent>(16);
// Spawn a background task to run the flow, sending events through tx
let db_clone = db.clone();
let id_clone = id;
let input = req.input.clone();
let user_info = Some((user.uid, user.username));
tokio::spawn(async move {
// Reuse most of flow_service::run's internal logic, injecting event_tx via DriveOptions
let _ = flow_service::run_with_stream(db_clone, id_clone, flow_service::RunReq { input }, user_info, tx).await;
});
// Wrap the Receiver into an SSE response via the shared helper
Ok(sse::from_mpsc(rx))
}
// ================= WebSocket mode: the route only forwards =================
async fn run_ws(
State(db): State<Db>,
Path(id): Path<i64>,
Query(q): Query<HashMap<String, String>>,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> Result<Response, AppError> {
crate::middlewares::ws::run_ws(State(db), Path(id), Query(q), headers, ws).await
}


@ -1,9 +1,47 @@
//! Request-log query route module
//!
//! Provides paginated query and batch-delete endpoints for system request logs.
use axum::{
extract::{Query, State},
routing::{get, delete},
Json, Router,
};
use axum::extract::Path;
use crate::db::Db;
use crate::error::AppError;
use crate::response::ApiResponse;
use crate::services::log_service;
/// Route definitions: log endpoints
pub fn router() -> Router<Db> {
Router::new()
.route("/logs", get(list))
.route("/logs/{ids}", delete(delete_many))
}
/// Log list
async fn list(
State(db): State<Db>,
Query(p): Query<log_service::ListParams>,
) -> Result<Json<ApiResponse<log_service::PageResp<log_service::LogInfo>>>, AppError> {
let res = log_service::list(&db, p).await?;
Ok(Json(ApiResponse::ok(res)))
}
/// Batch-delete system request logs
///
/// The `ids` path parameter is a comma-separated ID list, e.g. /logs/1,2,3
/// Returns: { deleted: <u64> }, where `deleted` is the number of rows actually removed
async fn delete_many(
State(db): State<Db>,
Path(ids): Path<String>,
) -> Result<Json<ApiResponse<serde_json::Value>>, AppError> {
let id_list: Vec<i64> = ids
.split(',')
.filter_map(|s| s.trim().parse::<i64>().ok())
.collect();
let deleted = log_service::delete_many(&db, id_list).await?;
Ok(Json(ApiResponse::ok(serde_json::json!({ "deleted": deleted }))))
}
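The comma-separated `ids` parsing tolerates whitespace and silently drops non-numeric entries rather than failing the request; a standalone sketch of the same `filter_map` logic (the `parse_id_list` helper name is ours):

```rust
// Hypothetical standalone helper mirroring delete_many's parsing of the
// comma-separated `ids` path parameter: entries are trimmed, and anything
// that does not parse as i64 (including empty segments) is dropped.
fn parse_id_list(ids: &str) -> Vec<i64> {
    ids.split(',')
        .filter_map(|s| s.trim().parse::<i64>().ok())
        .collect()
}

fn main() {
    // "abc" and the empty trailing segment are ignored.
    assert_eq!(parse_id_list("1, 2,abc,3,"), vec![1, 2, 3]);
}
```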


@@ -1,11 +1,14 @@
pub mod auth;
pub mod users;
pub mod roles;
pub mod departments;
// Positions
pub mod positions;
pub mod menus;
pub mod logs;
pub mod flows;
pub mod flow_run_logs;
pub mod dynamic_api;
pub mod schedule_jobs;
use axum::Router;
use crate::db::Db;
@@ -18,6 +21,9 @@ pub fn api_router() -> Router<Db> {
.merge(menus::router())
.merge(departments::router())
.merge(logs::router())
.merge(flows::router())
// Merge position routes
.merge(positions::router())
.merge(flow_run_logs::router())
.merge(dynamic_api::router())
.merge(schedule_jobs::router())
}


@@ -0,0 +1,76 @@
use axum::{Router, routing::{get, post, put}, extract::{State, Path, Query}, Json};
use crate::{db::Db, error::AppError, response::ApiResponse, services::{schedule_job_service, flow_service}, models::schedule_job};
use crate::middlewares::jwt::AuthUser;
use sea_orm::EntityTrait;
pub fn router() -> Router<Db> {
Router::new()
.route("/schedule_jobs", get(list).post(create))
.route("/schedule_jobs/{id}", put(update).delete(remove))
// New: dedicated enable/disable endpoints
.route("/schedule_jobs/{id}/enable", post(enable))
.route("/schedule_jobs/{id}/disable", post(disable))
// New: manual execution endpoint
.route("/schedule_jobs/{id}/execute", post(execute))
}
async fn list(State(db): State<Db>, Query(p): Query<schedule_job_service::ListParams>) -> Result<Json<ApiResponse<schedule_job_service::PageResp<schedule_job_service::ScheduleJobDoc>>>, AppError> {
let res = schedule_job_service::list(&db, p).await?;
Ok(Json(ApiResponse::ok(res)))
}
#[derive(serde::Deserialize)]
struct CreateReq { name: String, cron_expr: String, enabled: bool, flow_code: String }
async fn create(State(db): State<Db>, _user: AuthUser, Json(req): Json<CreateReq>) -> Result<Json<ApiResponse<schedule_job_service::ScheduleJobDoc>>, AppError> {
let res = schedule_job_service::create(&db, schedule_job_service::CreateReq { name: req.name, cron_expr: req.cron_expr, enabled: req.enabled, flow_code: req.flow_code }).await?;
Ok(Json(ApiResponse::ok(res)))
}
#[derive(serde::Deserialize)]
struct UpdateReq { name: Option<String>, cron_expr: Option<String>, enabled: Option<bool>, flow_code: Option<String> }
async fn update(State(db): State<Db>, _user: AuthUser, Path(id): Path<String>, Json(req): Json<UpdateReq>) -> Result<Json<ApiResponse<schedule_job_service::ScheduleJobDoc>>, AppError> {
let res = schedule_job_service::update(&db, &id, schedule_job_service::UpdateReq { name: req.name, cron_expr: req.cron_expr, enabled: req.enabled, flow_code: req.flow_code }).await?;
Ok(Json(ApiResponse::ok(res)))
}
async fn remove(State(db): State<Db>, _user: AuthUser, Path(id): Path<String>) -> Result<Json<ApiResponse<serde_json::Value>>, AppError> {
schedule_job_service::remove(&db, &id).await?;
Ok(Json(ApiResponse::ok(serde_json::json!({}))))
}
// New: enable the given job (no request body required)
async fn enable(State(db): State<Db>, _user: AuthUser, Path(id): Path<String>) -> Result<Json<ApiResponse<schedule_job_service::ScheduleJobDoc>>, AppError> {
let res = schedule_job_service::update(&db, &id, schedule_job_service::UpdateReq { name: None, cron_expr: None, enabled: Some(true), flow_code: None }).await?;
Ok(Json(ApiResponse::ok(res)))
}
// New: disable the given job (no request body required)
async fn disable(State(db): State<Db>, _user: AuthUser, Path(id): Path<String>) -> Result<Json<ApiResponse<schedule_job_service::ScheduleJobDoc>>, AppError> {
let res = schedule_job_service::update(&db, &id, schedule_job_service::UpdateReq { name: None, cron_expr: None, enabled: Some(false), flow_code: None }).await?;
Ok(Json(ApiResponse::ok(res)))
}
// New: manually execute the given job
async fn execute(State(db): State<Db>, user: AuthUser, Path(id): Path<String>) -> Result<Json<ApiResponse<serde_json::Value>>, AppError> {
// 1) Fetch the job
let job = schedule_job::Entity::find_by_id(id.to_string())
.one(&db)
.await?
.ok_or(AppError::NotFound)?;
// 2) Look up the flow by flow_code
let flow_doc = flow_service::get_by_code(&db, &job.flow_code).await
.map_err(flow_service::ae)?;
// 3) Run the flow (empty input; operator is the current user)
let result = flow_service::run(
&db,
flow_doc.id,
flow_service::RunReq { input: serde_json::json!({}) },
Some((user.uid, user.username)),
).await.map_err(flow_service::ae)?;
Ok(Json(ApiResponse::ok(serde_json::to_value(result).map_err(|e| AppError::BadRequest(e.to_string()))?)))
}
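The enable/disable endpoints above are thin wrappers over the generic update with only `enabled` set. A minimal, stdlib-only sketch of that Option-based partial-update pattern (the `Job` and `JobPatch` names here are illustrative, not the crate's types):

```rust
// Illustrative partial-update pattern: Some(..) overwrites a field,
// None leaves it untouched — the same shape as UpdateReq above.
#[derive(Debug, PartialEq)]
struct Job {
    name: String,
    enabled: bool,
}

struct JobPatch {
    name: Option<String>,
    enabled: Option<bool>,
}

fn apply_patch(job: &mut Job, patch: JobPatch) {
    if let Some(n) = patch.name { job.name = n; }
    if let Some(e) = patch.enabled { job.enabled = e; }
}

fn main() {
    let mut job = Job { name: "sync".into(), enabled: false };
    // Equivalent of POST /schedule_jobs/{id}/enable: only `enabled` changes.
    apply_patch(&mut job, JobPatch { name: None, enabled: Some(true) });
    assert_eq!(job, Job { name: "sync".into(), enabled: true });
}
```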


@@ -0,0 +1,141 @@
use crate::{db::Db, models::flow_run_log};
use sea_orm::{ActiveModelTrait, Set, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder, ColumnTrait};
use sea_orm::QuerySelect;
use chrono::{DateTime, FixedOffset, Utc};
#[derive(serde::Serialize)]
pub struct PageResp<T> { pub items: Vec<T>, pub total: u64, pub page: u64, pub page_size: u64 }
#[derive(serde::Deserialize)]
pub struct ListParams { pub page: Option<u64>, pub page_size: Option<u64>, pub flow_id: Option<i64>, pub flow_code: Option<String>, pub user: Option<String>, pub ok: Option<bool> }
#[derive(serde::Serialize)]
pub struct RunLogItem {
pub id: i64,
pub flow_id: i64,
pub flow_code: Option<String>,
pub input: Option<String>,
pub output: Option<String>,
pub ok: bool,
pub logs: Option<String>,
pub user_id: Option<i64>,
pub username: Option<String>,
pub started_at: chrono::DateTime<chrono::FixedOffset>,
pub duration_ms: i64,
}
impl From<flow_run_log::Model> for RunLogItem {
fn from(m: flow_run_log::Model) -> Self {
Self { id: m.id, flow_id: m.flow_id, flow_code: m.flow_code, input: m.input, output: m.output, ok: m.ok, logs: m.logs, user_id: m.user_id, username: m.username, started_at: m.started_at, duration_ms: m.duration_ms }
}
}
#[derive(serde::Serialize, serde::Deserialize, Clone, Debug)]
pub struct CreateRunLogInput {
pub flow_id: i64,
pub flow_code: Option<String>,
pub input: Option<String>,
pub output: Option<String>,
pub ok: bool,
pub logs: Option<String>,
pub user_id: Option<i64>,
pub username: Option<String>,
pub started_at: DateTime<FixedOffset>,
pub duration_ms: i64,
}
pub async fn create(db: &Db, input: CreateRunLogInput) -> anyhow::Result<i64> {
let am = flow_run_log::ActiveModel {
id: Set(crate::utils::generate_flow_run_log_id()),
flow_id: Set(input.flow_id),
flow_code: Set(input.flow_code),
input: Set(input.input),
output: Set(input.output),
ok: Set(input.ok),
logs: Set(input.logs),
user_id: Set(input.user_id),
username: Set(input.username),
started_at: Set(input.started_at),
duration_ms: Set(input.duration_ms),
created_at: Set(Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap())),
};
let m = am.insert(db).await?;
Ok(m.id)
}
/// Paginated query of flow run logs
///
/// # Arguments
/// * `db` - database connection
/// * `p` - query parameters: page, page size, and filters for flow ID, flow code, username, and run status
///
/// # Returns
/// * `Ok(PageResp<RunLogItem>)` - page of results with log items and paging info
/// * `Err(anyhow::Error)` - the query error
pub async fn list(db: &Db, p: ListParams) -> anyhow::Result<PageResp<RunLogItem>> {
let page = p.page.unwrap_or(1).max(1);
let page_size = p.page_size.unwrap_or(10).max(1);
// Shared filter conditions
let mut base_selector = flow_run_log::Entity::find();
if let Some(fid) = p.flow_id {
base_selector = base_selector.filter(flow_run_log::Column::FlowId.eq(fid));
}
if let Some(fcode) = p.flow_code.as_ref() {
base_selector = base_selector.filter(flow_run_log::Column::FlowCode.eq(fcode.clone()));
}
if let Some(u) = p.user.as_ref() {
let like = format!("%{}%", u.replace('%', "\\%").replace('_', "\\_"));
base_selector = base_selector.filter(flow_run_log::Column::Username.like(like));
}
if let Some(ok) = p.ok {
base_selector = base_selector.filter(flow_run_log::Column::Ok.eq(ok));
}
// Count once, keeping compatibility with the old endpoint
let total = base_selector.clone().count(db).await? as u64;
// Compute the offset
let offset = (page - 1) * page_size;
// ⭐ Key optimization: decide between the first half and the last half
let models = if offset > total / 2 {
// ---- Last-page optimization: ascending order + limit + reverse ----
// Clamp the limit so a partial last page is not over-fetched
let take = page_size.min(total.saturating_sub(offset));
let start_idx = total.saturating_sub(offset + take);
let mut rows = base_selector
.clone()
.order_by_asc(flow_run_log::Column::StartedAt)
.order_by_asc(flow_run_log::Column::Id)
.offset(start_idx)
.limit(take)
.all(db)
.await?;
rows.reverse();
rows
} else {
// ---- Regular pagination ----
base_selector
.order_by_desc(flow_run_log::Column::StartedAt)
.order_by_desc(flow_run_log::Column::Id)
.offset(offset)
.limit(page_size)
.all(db)
.await?
};
Ok(PageResp {
items: models.into_iter().map(Into::into).collect(),
total,
page,
page_size,
})
}
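The window arithmetic behind the ascending-plus-reverse trick can be checked in isolation. This stdlib-only sketch computes the ascending start index, with the limit clamped so a partial last page is not over-fetched (the `last_half_window` helper name is ours):

```rust
// For a page deep in the table, fetch ascending from start_idx with a
// clamped limit, then reverse — equivalent to descending with a large
// OFFSET, but cheaper for the database to serve.
fn last_half_window(total: u64, page: u64, page_size: u64) -> (u64, u64) {
    let offset = (page - 1) * page_size;
    // Clamp so a partial last page returns only its own rows.
    let take = page_size.min(total.saturating_sub(offset));
    let start_idx = total.saturating_sub(offset + take);
    (start_idx, take)
}

fn main() {
    // 95 rows, 10 per page: the final page (page 10) holds descending
    // rows 90..=94, i.e. ascending rows 0..5 reversed.
    assert_eq!(last_half_window(95, 10, 10), (0, 5));
    // A full page deep in the table: page 9 = ascending rows 5..15 reversed.
    assert_eq!(last_half_window(95, 9, 10), (5, 10));
}
```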
// New: batch-delete flow run logs
pub async fn delete_many(db: &Db, ids: Vec<i64>) -> anyhow::Result<u64> {
if ids.is_empty() { return Ok(0); }
let res = flow_run_log::Entity::delete_many()
.filter(flow_run_log::Column::Id.is_in(ids))
.exec(db)
.await?;
Ok(res.rows_affected as u64)
}
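The `user` filter above escapes LIKE wildcards before interpolating user input into the pattern; a standalone sketch of that escaping (the `like_pattern` helper name is ours; `\` is the MySQL default escape character):

```rust
// Escape `%` and `_` so they match literally inside the LIKE pattern,
// then wrap with `%...%` for a contains-style match.
fn like_pattern(user_input: &str) -> String {
    format!("%{}%", user_input.replace('%', "\\%").replace('_', "\\_"))
}

fn main() {
    assert_eq!(like_pattern("50%_off"), "%50\\%\\_off%");
}
```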


@@ -0,0 +1,451 @@
use anyhow::Context as _;
use serde::{Deserialize, Serialize};
use crate::error::AppError;
use crate::flow::{self, dsl::FlowDSL, engine::FlowEngine, context::{DriveOptions, ExecutionMode, StreamEvent}, log_handler::{FlowLogHandler, DatabaseLogHandler, SseLogHandler}};
use crate::db::Db;
use crate::models::flow as db_flow;
use crate::models::request_log; // New: used to look up the last modifier
use sea_orm::{EntityTrait, ActiveModelTrait, Set, DbErr, ColumnTrait, QueryFilter, PaginatorTrait, QueryOrder};
use sea_orm::entity::prelude::DateTimeWithTimeZone; // New: timezone-aware datetime type
use chrono::{Utc, FixedOffset};
use tracing::{info, error};
// New: channel for streaming events
use tokio::sync::mpsc::Sender;
// New: carries partial context and logs alongside errors
use crate::flow::engine::DriveError;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowSummary {
pub id: i64,
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")] pub code: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")] pub remark: Option<String>,
pub created_at: DateTimeWithTimeZone,
pub updated_at: DateTimeWithTimeZone,
#[serde(skip_serializing_if = "Option::is_none")] pub last_modified_by: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowDoc {
pub id: i64,
pub yaml: String,
#[serde(skip_serializing_if = "Option::is_none")] pub design_json: Option<serde_json::Value>,
#[serde(skip_serializing_if = "Option::is_none")] pub name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")] pub code: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")] pub remark: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowCreateReq { pub yaml: Option<String>, pub name: Option<String>, pub design_json: Option<serde_json::Value>, pub code: Option<String>, pub remark: Option<String> }
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowUpdateReq { pub yaml: Option<String>, pub design_json: Option<serde_json::Value>, pub name: Option<String>, pub code: Option<String>, pub remark: Option<String> }
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunReq { #[serde(default)] pub input: serde_json::Value }
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunResult { pub ok: bool, pub ctx: serde_json::Value, pub logs: Vec<String> }
#[derive(Clone, Debug, serde::Serialize)]
pub struct PageResp<T> { pub items: Vec<T>, pub total: u64, pub page: u64, pub page_size: u64 }
// list flows from database with pagination & keyword
pub async fn list(db: &Db, page: u64, page_size: u64, keyword: Option<String>) -> anyhow::Result<PageResp<FlowSummary>> {
let mut selector = db_flow::Entity::find();
if let Some(k) = keyword.filter(|s| !s.is_empty()) {
let like = format!("%{}%", k);
// Fuzzy match on name; if the keyword parses as a number, also match the ID exactly
selector = selector.filter(
db_flow::Column::Name.like(like.clone())
.or(
match k.parse::<i64>() {
Ok(num) => db_flow::Column::Id.eq(num),
Err(_) => db_flow::Column::Name.like(like),
}
)
);
}
let paginator = selector.order_by_desc(db_flow::Column::CreatedAt).paginate(db, page_size);
let total = paginator.num_items().await? as u64;
let models = paginator.fetch_page(if page > 0 { page - 1 } else { 0 }).await?;
let mut items: Vec<FlowSummary> = Vec::with_capacity(models.len());
for row in models.into_iter() {
let id = row.id;
let name = row
.name
.clone()
.or_else(|| row.yaml.as_deref().and_then(extract_name))
.unwrap_or_else(|| {
let prefix: String = id.to_string().chars().take(8).collect();
format!("flow_{}", prefix)
});
// Last modifier: find the most recent PUT request against this flow in the request logs
let last_modified_by = request_log::Entity::find()
.filter(request_log::Column::Path.like(format!("/api/flows/{}%", id)))
.filter(request_log::Column::Method.eq("PUT"))
.order_by_desc(request_log::Column::RequestTime)
.one(db)
.await?
.and_then(|m| m.username);
items.push(FlowSummary {
id,
name,
code: row.code.clone(),
remark: row.remark.clone(),
created_at: row.created_at,
updated_at: row.updated_at,
last_modified_by,
});
}
Ok(PageResp { items, total, page, page_size })
}
// create new flow with yaml or just name
pub async fn create(db: &Db, req: FlowCreateReq) -> anyhow::Result<FlowDoc> {
info!(target: "udmin", "flow.create: start");
if let Some(yaml) = &req.yaml {
let _parsed: FlowDSL = serde_yaml::from_str(yaml).context("invalid flow yaml")?;
info!(target: "udmin", "flow.create: yaml parsed ok");
}
let id: i64 = crate::utils::generate_id();
let name = req
.name
.clone()
.or_else(|| req.yaml.as_deref().and_then(extract_name));
let now = Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap());
let design_json_str = match &req.design_json { Some(v) => serde_json::to_string(v).ok(), None => None };
// Clone for the return value
let ret_name = name.clone();
let ret_code = req.code.clone();
let ret_remark = req.remark.clone();
let am = db_flow::ActiveModel {
id: Set(id),
name: Set(name.clone()),
yaml: Set(req.yaml.clone()),
design_json: Set(design_json_str),
// New: persist code and remark
code: Set(req.code.clone()),
remark: Set(req.remark.clone()),
created_at: Set(now),
updated_at: Set(Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap())),
..Default::default()
};
info!(target: "udmin", "flow.create: inserting into db id={}", id);
match db_flow::Entity::insert(am).exec(db).await {
Ok(_) => {
info!(target: "udmin", "flow.create: insert ok id={}", id);
Ok(FlowDoc { id, yaml: req.yaml.unwrap_or_default(), design_json: req.design_json, name: ret_name, code: ret_code, remark: ret_remark })
}
Err(DbErr::RecordNotInserted) => {
error!(target: "udmin", "flow.create: insert returned RecordNotInserted, verifying by select id={}", id);
match db_flow::Entity::find_by_id(id).one(db).await {
Ok(Some(_)) => {
info!(target: "udmin", "flow.create: found inserted row by id={}, treating as success", id);
Ok(FlowDoc { id, yaml: req.yaml.unwrap_or_default(), design_json: req.design_json, name, code: req.code, remark: req.remark })
}
Ok(None) => Err(anyhow::anyhow!("insert flow failed").context("verify inserted row not found")),
Err(e) => Err(anyhow::Error::new(e).context("insert flow failed")),
}
}
Err(e) => {
error!(target: "udmin", error = ?e, "flow.create: insert failed");
Err(anyhow::Error::new(e).context("insert flow failed"))
}
}
}
pub async fn get(db: &Db, id: i64) -> anyhow::Result<FlowDoc> {
let row = db_flow::Entity::find_by_id(id).one(db).await?;
let row = row.ok_or_else(|| anyhow::anyhow!("not found"))?;
let yaml = row.yaml.unwrap_or_default();
let design_json = row.design_json.and_then(|s| serde_json::from_str::<serde_json::Value>(&s).ok());
let name = row
.name
.clone()
.or_else(|| extract_name(&yaml));
Ok(FlowDoc { id: row.id, yaml, design_json, name, code: row.code, remark: row.remark })
}
pub async fn get_by_code(db: &Db, code: &str) -> anyhow::Result<FlowDoc> {
let row = db_flow::Entity::find()
.filter(db_flow::Column::Code.eq(code))
.one(db)
.await?;
let row = row.ok_or_else(|| anyhow::anyhow!("flow not found with code: {}", code))?;
let yaml = row.yaml.unwrap_or_default();
let design_json = row.design_json.and_then(|s| serde_json::from_str::<serde_json::Value>(&s).ok());
// Name fallback: if the database name is empty, try extracting it from the YAML
let name = row
.name
.clone()
.or_else(|| extract_name(&yaml));
Ok(FlowDoc { id: row.id, yaml, design_json, name, code: row.code, remark: row.remark })
}
pub async fn update(db: &Db, id: i64, req: FlowUpdateReq) -> anyhow::Result<FlowDoc> {
if let Some(yaml) = &req.yaml {
let _parsed: FlowDSL = serde_yaml::from_str(yaml).context("invalid flow yaml")?;
}
let row = db_flow::Entity::find_by_id(id).one(db).await?;
let Some(row) = row else { return Err(anyhow::anyhow!("not found")); };
let mut am: db_flow::ActiveModel = row.into();
if let Some(yaml) = req.yaml {
let next_name = req
.name
.or_else(|| extract_name(&yaml));
if let Some(n) = next_name { am.name = Set(Some(n)); }
am.yaml = Set(Some(yaml.clone()));
} else if let Some(n) = req.name { am.name = Set(Some(n)); }
if let Some(dj) = req.design_json { let s = serde_json::to_string(&dj)?; am.design_json = Set(Some(s)); }
if let Some(c) = req.code { am.code = Set(Some(c)); }
if let Some(r) = req.remark { am.remark = Set(Some(r)); }
am.updated_at = Set(Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap()));
am.update(db).await?;
let got = db_flow::Entity::find_by_id(id).one(db).await?.unwrap();
let dj = got.design_json.as_deref().and_then(|s| serde_json::from_str::<serde_json::Value>(&s).ok());
Ok(FlowDoc { id, yaml: got.yaml.unwrap_or_default(), design_json: dj, name: got.name, code: got.code, remark: got.remark })
}
pub async fn delete(db: &Db, id: i64) -> anyhow::Result<()> {
let row = db_flow::Entity::find_by_id(id).one(db).await?;
let Some(row) = row else { return Err(anyhow::anyhow!("not found")); };
let am: db_flow::ActiveModel = row.into();
am.delete(db).await?;
Ok(())
}
pub async fn run(db: &Db, id: i64, req: RunReq, operator: Option<(i64, String)>) -> anyhow::Result<RunResult> {
let log_handler = DatabaseLogHandler::new(db.clone());
match run_internal(db, id, req, operator, &log_handler, None).await {
Ok((ctx, logs)) => Ok(RunResult { ok: true, ctx, logs }),
Err(e) => {
if let Some(de) = e.downcast_ref::<DriveError>().cloned() {
Ok(RunResult { ok: false, ctx: de.ctx, logs: de.logs })
} else {
let mut full = e.to_string();
for cause in e.chain().skip(1) { full.push_str(" | "); full.push_str(&cause.to_string()); }
Ok(RunResult { ok: false, ctx: serde_json::json!({}), logs: vec![full] })
}
}
}
}
// New: streaming run that emits node events and a final done event
pub async fn run_with_stream(
db: Db,
id: i64,
req: RunReq,
operator: Option<(i64, String)>,
event_tx: Sender<StreamEvent>,
) -> anyhow::Result<()> {
let tx_done = event_tx.clone();
let log_handler = SseLogHandler::new(db.clone(), event_tx.clone());
match run_internal(&db, id, req, operator, &log_handler, Some(event_tx)).await {
Ok((_ctx, _logs)) => Ok(()),
Err(e) => {
if let Some(de) = e.downcast_ref::<DriveError>().cloned() {
crate::middlewares::sse::emit_done(&tx_done, false, de.ctx, de.logs).await;
} else {
let mut full = e.to_string();
for cause in e.chain().skip(1) { full.push_str(" | "); full.push_str(&cause.to_string()); }
crate::middlewares::sse::emit_done(&tx_done, false, serde_json::json!({}), vec![full]).await;
}
Ok(())
}
}
}
// Shared internal run implementation
async fn run_internal(
db: &Db,
id: i64,
req: RunReq,
operator: Option<(i64, String)>,
log_handler: &dyn FlowLogHandler,
event_tx: Option<Sender<StreamEvent>>,
) -> anyhow::Result<(serde_json::Value, Vec<String>)> {
info!(target = "udmin", "flow.run_internal: start id={}", id);
let start = Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap());
let flow_code: Option<String> = match db_flow::Entity::find_by_id(id).one(db).await { Ok(Some(row)) => row.code, _ => None };
let doc = match get(db, id).await {
Ok(d) => d,
Err(e) => {
error!(target = "udmin", error = ?e, "flow.run_internal: get doc failed id={}", id);
let error_msg = format!("get doc failed: {}", e);
log_handler.log_error(id, flow_code.as_deref(), &req.input, &error_msg, operator, start, 0).await?;
return Err(e);
}
};
info!(target = "udmin", "flow.run_internal: doc loaded id={} has_design_json={} yaml_len={}", id, doc.design_json.is_some(), doc.yaml.len());
// Build the chain and ctx
let mut exec_mode: ExecutionMode = ExecutionMode::Sync;
let (mut chain, mut ctx) = if let Some(design) = &doc.design_json {
let chain_from_json = match flow::dsl::chain_from_design_json(design) {
Ok(c) => c,
Err(e) => {
error!(target = "udmin", error = ?e, "flow.run_internal: build chain from design_json failed id={}", id);
let error_msg = format!("build chain from design_json failed: {}", e);
log_handler.log_error(id, flow_code.as_deref(), &req.input, &error_msg, operator, start, 0).await?;
return Err(e);
}
};
let mut ctx = req.input.clone();
let supplement = flow::mappers::ctx_from_design_json(design);
merge_json(&mut ctx, &supplement);
let mode_str = design.get("executionMode").and_then(|v| v.as_str())
.or_else(|| design.get("execution_mode").and_then(|v| v.as_str()))
.unwrap_or("sync");
exec_mode = parse_execution_mode(mode_str);
(chain_from_json, ctx)
} else {
let dsl = match serde_yaml::from_str::<FlowDSL>(&doc.yaml) {
Ok(d) => d,
Err(e) => {
error!(target = "udmin", error = ?e, "flow.run_internal: parse YAML failed id={}", id);
let error_msg = format!("parse YAML failed: {}", e);
log_handler.log_error(id, flow_code.as_deref(), &req.input, &error_msg, operator, start, 0).await?;
return Err(anyhow::Error::new(e).context("invalid flow yaml"));
}
};
if let Some(m) = dsl.execution_mode.as_deref() { exec_mode = parse_execution_mode(m); }
(dsl.into(), req.input.clone())
};
// Fallback
if chain.nodes.is_empty() {
if !doc.yaml.trim().is_empty() {
match serde_yaml::from_str::<FlowDSL>(&doc.yaml) {
Ok(dsl) => {
chain = dsl.clone().into();
ctx = req.input.clone();
if let Some(m) = dsl.execution_mode.as_deref() { exec_mode = parse_execution_mode(m); }
}
Err(e) => {
let error_msg = format!("fallback parse YAML failed: {}", e);
log_handler.log_error(id, flow_code.as_deref(), &req.input, &error_msg, operator, start, 0).await?;
return Err(anyhow::anyhow!("empty chain: design_json produced no nodes and YAML parse failed"));
}
}
} else {
let error_msg = "empty chain: both design_json and yaml are empty";
log_handler.log_error(id, flow_code.as_deref(), &req.input, error_msg, operator, start, 0).await?;
return Err(anyhow::anyhow!(error_msg));
}
}
// Tasks and engine
let tasks: flow::task::TaskRegistry = flow::task::get_registry();
let engine = FlowEngine::builder().tasks(tasks).build();
// Execute
let drive_res = engine
.drive(&chain, ctx, DriveOptions { execution_mode: exec_mode.clone(), event_tx, ..Default::default() })
.await;
match drive_res {
Ok((mut ctx, logs)) => {
// Remove variable nodes
if let serde_json::Value::Object(map) = &mut ctx {
if let Some(serde_json::Value::Object(nodes)) = map.get_mut("nodes") {
let keys: Vec<String> = nodes
.iter()
.filter_map(|(k, v)| if v.get("variable").is_some() { Some(k.clone()) } else { None })
.collect();
for k in keys { nodes.remove(&k); }
}
}
let dur = (Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap()) - start).num_milliseconds() as i64;
log_handler.log_success(id, flow_code.as_deref(), &req.input, &ctx, &logs, operator, start, dur).await?;
Ok((ctx, logs))
}
Err(e) => {
error!(target = "udmin", error = ?e, "flow.run_internal: engine drive failed id={}", id);
let dur = (Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap()) - start).num_milliseconds() as i64;
if let Some(de) = e.downcast_ref::<DriveError>().cloned() {
log_handler
.log_error_detail(
id,
flow_code.as_deref(),
&req.input,
&de.ctx,
&de.logs,
&de.message,
operator,
start,
dur,
)
.await?;
} else {
let error_msg = format!("engine drive failed: {}", e);
log_handler
.log_error(
id,
flow_code.as_deref(),
&req.input,
&error_msg,
operator,
start,
dur,
)
.await?;
}
Err(e)
}
}
}
fn extract_name(yaml: &str) -> Option<String> {
for line in yaml.lines() {
let lt = line.trim();
if lt.starts_with("#") && lt.len() > 1 { return Some(lt.trim_start_matches('#').trim().to_string()); }
if lt.starts_with("name:") {
let name = lt.trim_start_matches("name:").trim();
if !name.is_empty() { return Some(name.to_string()); }
}
}
None
}
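Note that `extract_name` prefers the first leading `#` comment line over a later `name:` key. A self-contained copy with a quick behavioral check (the sample YAML strings are illustrative):

```rust
// Verbatim logic of extract_name: the first `#` comment line wins;
// otherwise the first non-empty `name:` value; otherwise None.
fn extract_name(yaml: &str) -> Option<String> {
    for line in yaml.lines() {
        let lt = line.trim();
        if lt.starts_with('#') && lt.len() > 1 {
            return Some(lt.trim_start_matches('#').trim().to_string());
        }
        if lt.starts_with("name:") {
            let name = lt.trim_start_matches("name:").trim();
            if !name.is_empty() {
                return Some(name.to_string());
            }
        }
    }
    None
}

fn main() {
    // The comment line shadows the name: key below it.
    assert_eq!(extract_name("# Order Sync\nname: x"), Some("Order Sync".to_string()));
    assert_eq!(extract_name("name: order_sync"), Some("order_sync".to_string()));
    assert_eq!(extract_name("steps:\n  - a"), None);
}
```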
pub fn ae<E: Into<anyhow::Error>>(e: E) -> AppError {
let err: anyhow::Error = e.into();
let mut full = err.to_string();
for cause in err.chain().skip(1) {
full.push_str(" | ");
full.push_str(&cause.to_string());
}
// MySQL duplicate key example: "Database error: Duplicate entry 'xxx' for key 'idx-unique-flows-code'"
// Also tolerate messages that contain the unique index name or related keywords
if full.contains("Duplicate entry") || full.contains("idx-unique-flows-code") || (full.contains("code") && full.contains("unique")) {
return AppError::Conflict("流程编码已存在".to_string());
}
AppError::Anyhow(anyhow::anyhow!(full))
}
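The duplicate-key detection above is plain substring matching over the flattened error text; a standalone sketch of that check (the `is_duplicate_code` helper name is ours):

```rust
// Mirrors the conditions in `ae`: a MySQL duplicate-entry message, the
// unique index name, or the combination of "code" and "unique".
fn is_duplicate_code(full: &str) -> bool {
    full.contains("Duplicate entry")
        || full.contains("idx-unique-flows-code")
        || (full.contains("code") && full.contains("unique"))
}

fn main() {
    assert!(is_duplicate_code(
        "Database error: Duplicate entry 'xxx' for key 'idx-unique-flows-code'"
    ));
    assert!(!is_duplicate_code("connection refused"));
}
```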
// recursively merge json objects: b into a (objects merge key by key; other values are overwritten)
fn merge_json(a: &mut serde_json::Value, b: &serde_json::Value) {
use serde_json::Value as V;
match (a, b) {
(V::Object(ao), V::Object(bo)) => {
for (k, v) in bo.iter() {
match ao.get_mut(k) {
Some(av) => merge_json(av, v),
None => { ao.insert(k.clone(), v.clone()); }
}
}
}
(a_slot, b_val) => { *a_slot = b_val.clone(); }
}
}
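The merge semantics can be demonstrated without serde_json by substituting a tiny stdlib-only value type (the `Val` enum here is illustrative): objects merge recursively key by key, and any non-object slot is overwritten by the right-hand value.

```rust
use std::collections::BTreeMap;

// Minimal stand-in for serde_json::Value, enough to show the merge shape.
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Num(i64),
    Obj(BTreeMap<String, Val>),
}

// Same structure as merge_json: recurse on object/object pairs,
// otherwise overwrite the left slot with the right value.
fn merge(a: &mut Val, b: &Val) {
    match (a, b) {
        (Val::Obj(ao), Val::Obj(bo)) => {
            for (k, v) in bo.iter() {
                match ao.get_mut(k) {
                    Some(av) => merge(av, v),
                    None => {
                        ao.insert(k.clone(), v.clone());
                    }
                }
            }
        }
        (slot, val) => *slot = val.clone(),
    }
}

fn main() {
    let mut a = Val::Obj(BTreeMap::from([
        ("x".to_string(), Val::Num(1)),
        ("y".to_string(), Val::Num(2)),
    ]));
    let b = Val::Obj(BTreeMap::from([("y".to_string(), Val::Num(9))]));
    merge(&mut a, &b);
    // "y" is overwritten; "x" is kept.
    assert_eq!(
        a,
        Val::Obj(BTreeMap::from([
            ("x".to_string(), Val::Num(1)),
            ("y".to_string(), Val::Num(9)),
        ]))
    );
}
```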
// parse execution mode string
fn parse_execution_mode(s: &str) -> ExecutionMode {
match s.to_ascii_lowercase().as_str() {
"async" | "async_fire_and_forget" | "fire_and_forget" => ExecutionMode::AsyncFireAndForget,
_ => ExecutionMode::Sync,
}
}
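A self-contained check of the aliases `parse_execution_mode` accepts, with a local stand-in for the `ExecutionMode` variants it maps to:

```rust
// Local copy of the two variants the parser distinguishes.
#[derive(Debug, PartialEq)]
enum ExecutionMode {
    Sync,
    AsyncFireAndForget,
}

// Case-insensitive; anything unrecognized falls back to Sync.
fn parse_execution_mode(s: &str) -> ExecutionMode {
    match s.to_ascii_lowercase().as_str() {
        "async" | "async_fire_and_forget" | "fire_and_forget" => ExecutionMode::AsyncFireAndForget,
        _ => ExecutionMode::Sync,
    }
}

fn main() {
    assert_eq!(parse_execution_mode("ASYNC"), ExecutionMode::AsyncFireAndForget);
    assert_eq!(parse_execution_mode("sync"), ExecutionMode::Sync);
    // Unknown modes fall back to Sync.
    assert_eq!(parse_execution_mode("batch"), ExecutionMode::Sync);
}
```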


@@ -17,7 +17,7 @@ pub struct CreateLogInput {
pub async fn create(db: &Db, input: CreateLogInput) -> anyhow::Result<i64> {
let am = request_log::ActiveModel {
id: Set(crate::utils::generate_request_log_id()),
path: Set(input.path),
method: Set(input.method),
request_params: Set(input.request_params),
@@ -69,3 +69,13 @@ pub async fn list(db: &Db, p: ListParams) -> anyhow::Result<PageResp<LogInfo>> {
let models = paginator.fetch_page(if page>0 { page-1 } else { 0 }).await?;
Ok(PageResp { items: models.into_iter().map(Into::into).collect(), total, page, page_size })
}
// New: batch-delete system request logs
pub async fn delete_many(db: &Db, ids: Vec<i64>) -> anyhow::Result<u64> {
if ids.is_empty() { return Ok(0); }
let res = request_log::Entity::delete_many()
.filter(request_log::Column::Id.is_in(ids))
.exec(db)
.await?;
Ok(res.rows_affected as u64)
}


@@ -3,6 +3,8 @@ pub mod user_service;
pub mod role_service;
pub mod menu_service;
pub mod department_service;
// Position service
pub mod position_service;
pub mod log_service;
pub mod flow_service;
pub mod flow_run_log_service;
pub mod schedule_job_service;


@@ -0,0 +1,315 @@
//! Scheduled-job service (Service Layer)
//! Responsibilities:
//! 1) CRUD for scheduled jobs (schedule_jobs) in the database;
//! 2) sync with the scheduler after create/update/delete (register, update, remove);
//! 3) load enabled jobs and register them with the scheduler at service startup;
//! 4) build the job execution closure (JobExecutor), which performs runtime guards and triggers flows.
use std::{future::Future, pin::Pin, sync::Arc};
use chrono::{DateTime, FixedOffset, Utc};
use sea_orm::{
ActiveModelTrait, ColumnTrait, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder, Set,
};
use tokio_cron_scheduler::Job;
use tracing::{error, info};
use crate::{
db::Db,
error::AppError,
models::schedule_job,
services::flow_service,
utils,
};
/// Generic paginated response body
#[derive(serde::Serialize)]
pub struct PageResp<T> {
pub items: Vec<T>,
pub total: u64,
pub page: u64,
pub page_size: u64,
}
/// Job document (DTO)
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug)]
pub struct ScheduleJobDoc {
pub id: String,
pub name: String,
pub cron_expr: String,
pub enabled: bool,
pub flow_code: String,
pub created_at: DateTime<FixedOffset>,
pub updated_at: DateTime<FixedOffset>,
}
impl From<schedule_job::Model> for ScheduleJobDoc {
fn from(m: schedule_job::Model) -> Self {
Self {
id: m.id,
name: m.name,
cron_expr: m.cron_expr,
enabled: m.enabled,
flow_code: m.flow_code,
created_at: m.created_at,
updated_at: m.updated_at,
}
}
}
/// List query parameters
#[derive(serde::Deserialize)]
pub struct ListParams {
pub page: Option<u64>,
pub page_size: Option<u64>,
pub keyword: Option<String>,
pub enabled: Option<bool>,
}
/// Create-job request body
#[derive(serde::Deserialize)]
pub struct CreateReq {
pub name: String,
pub cron_expr: String,
pub enabled: bool,
pub flow_code: String,
}
/// Update-job request body (partial; every field optional)
#[derive(serde::Deserialize)]
pub struct UpdateReq {
pub name: Option<String>,
pub cron_expr: Option<String>,
pub enabled: Option<bool>,
pub flow_code: Option<String>,
}
/// Current UTC time converted to a fixed offset (avoids repetition across call sites)
fn now_fixed_offset() -> DateTime<FixedOffset> {
Utc::now().with_timezone(&FixedOffset::east_opt(0).unwrap())
}
/// Paginated job list, filterable by name keyword and enabled status
pub async fn list(db: &Db, p: ListParams) -> Result<PageResp<ScheduleJobDoc>, AppError> {
let page = p.page.unwrap_or(1);
let page_size = p.page_size.unwrap_or(10);
let mut query = schedule_job::Entity::find();
if let Some(k) = p.keyword {
query = query.filter(schedule_job::Column::Name.contains(&k));
}
if let Some(e) = p.enabled {
query = query.filter(schedule_job::Column::Enabled.eq(e));
}
let paginator = query
.order_by_desc(schedule_job::Column::UpdatedAt)
.paginate(db, page_size);
let total = paginator.num_items().await?;
let docs: Vec<ScheduleJobDoc> = paginator
.fetch_page(page.saturating_sub(1))
.await?
.into_iter()
.map(Into::into)
.collect();
let sample: Vec<_> = docs.iter().take(5).map(|d| (d.id.clone(), d.enabled)).collect();
info!(
target = "udmin",
total = %total,
page = %page,
page_size = %page_size,
sample = ?sample,
"schedule_jobs.list"
);
Ok(PageResp { items: docs, total, page, page_size })
}
/// Create a job
pub async fn create(db: &Db, req: CreateReq) -> Result<ScheduleJobDoc, AppError> {
// 1) Validate the cron expression
Job::new_async(&req.cron_expr, |_uuid, _l| Box::pin(async {}))
.map_err(|e| AppError::BadRequest(format!("无效的 cron 表达式: {e}")))?;
// 2) Ensure only one enabled job per flow_code
if req.enabled
&& schedule_job::Entity::find()
.filter(schedule_job::Column::FlowCode.eq(req.flow_code.clone()))
.filter(schedule_job::Column::Enabled.eq(true))
.one(db)
.await?
.is_some()
{
return Err(AppError::Conflict("同一 flow_code 已存在启用中的任务".into()));
}
// 3) Persist
let am = schedule_job::ActiveModel {
id: Set(uuid::Uuid::new_v4().to_string()),
name: Set(req.name),
cron_expr: Set(req.cron_expr),
enabled: Set(req.enabled),
flow_code: Set(req.flow_code),
created_at: Set(now_fixed_offset()),
updated_at: Set(now_fixed_offset()),
};
let m = am.insert(db).await?;
// 4) Sync the scheduler
let executor = build_executor_for_job(db, &m);
utils::add_or_update_job_by_model(&m, executor).await.map_err(AppError::Anyhow)?;
Ok(m.into())
}
/// Update a job
pub async fn update(db: &Db, id: &str, req: UpdateReq) -> Result<ScheduleJobDoc, AppError> {
let m = schedule_job::Entity::find_by_id(id.to_string())
.one(db)
.await?
.ok_or(AppError::NotFound)?;
let next_name = req.name.unwrap_or_else(|| m.name.clone());
let next_cron = req.cron_expr.unwrap_or_else(|| m.cron_expr.clone());
let next_enabled = req.enabled.unwrap_or(m.enabled);
let next_flow_code = req.flow_code.unwrap_or_else(|| m.flow_code.clone());
Job::new_async(&next_cron, |_uuid, _l| Box::pin(async {}))
.map_err(|e| AppError::BadRequest(format!("无效的 cron 表达式: {e}")))?;
if next_enabled
&& schedule_job::Entity::find()
.filter(schedule_job::Column::FlowCode.eq(next_flow_code.clone()))
.filter(schedule_job::Column::Enabled.eq(true))
.filter(schedule_job::Column::Id.ne(id.to_string()))
.one(db)
.await?
.is_some()
{
return Err(AppError::Conflict("同一 flow_code 已存在启用中的任务".into()));
}
info!(
target = "udmin",
id = %m.id,
prev_enabled = %m.enabled,
next_enabled = %next_enabled,
"schedule_jobs.update.apply"
);
let mut am: schedule_job::ActiveModel = m.into();
am.name = Set(next_name);
am.cron_expr = Set(next_cron);
am.enabled = Set(next_enabled);
am.flow_code = Set(next_flow_code);
am.updated_at = Set(now_fixed_offset());
let updated = am.update(db).await?;
info!(
target = "udmin",
id = %updated.id,
enabled = %updated.enabled,
"schedule_jobs.update.persisted"
);
let executor = build_executor_for_job(db, &updated);
utils::add_or_update_job_by_model(&updated, executor).await.map_err(AppError::Anyhow)?;
Ok(updated.into())
}
/// Delete a job
pub async fn remove(db: &Db, id: &str) -> Result<(), AppError> {
utils::remove_job_by_id(id).await.map_err(AppError::Anyhow)?;
let res = schedule_job::Entity::delete_by_id(id.to_string()).exec(db).await?;
if res.rows_affected == 0 {
return Err(AppError::NotFound);
}
Ok(())
}
/// Load and register all enabled jobs at startup
pub async fn init_load_enabled_and_register(db: &Db) -> Result<(), AppError> {
let enabled_jobs = schedule_job::Entity::find()
.filter(schedule_job::Column::Enabled.eq(true))
.all(db)
.await?;
info!(
target = "udmin",
count = enabled_jobs.len(),
"schedule_jobs.init.load_enabled"
);
for m in enabled_jobs {
let executor = build_executor_for_job(db, &m);
if let Err(e) = utils::add_or_update_job_by_model(&m, executor).await {
error!(
target = "udmin",
id = %m.id,
error = %e,
"schedule_jobs.init.add_failed"
);
}
}
Ok(())
}
/// Build the job execution closure (JobExecutor)
fn build_executor_for_job(db: &Db, m: &schedule_job::Model) -> utils::JobExecutor {
let db = db.clone();
let job_id = m.id.clone();
let job_name = m.name.clone();
let flow_code = m.flow_code.clone();
Arc::new(move || {
let db = db.clone();
let job_id = job_id.clone();
let job_name = job_name.clone();
let flow_code = flow_code.clone();
Box::pin(async move {
// Runtime guard: re-check the job state before executing
match schedule_job::Entity::find_by_id(job_id.clone()).one(&db).await {
Ok(Some(model)) if !model.enabled => {
info!(target = "udmin", job = %job_name, id = %job_id, "scheduler.tick.skip_disabled");
return;
}
Ok(None) => {
info!(target = "udmin", job = %job_name, id = %job_id, "scheduler.tick.deleted_remove_self");
if let Err(e) = utils::remove_job_by_id(&job_id).await {
error!(target = "udmin", id = %job_id, error = %e, "scheduler.self_remove.failed");
}
return;
}
Err(e) => {
error!(target = "udmin", job = %job_name, id = %job_id, error = %e, "scheduler.tick.check_failed");
return;
}
_ => {}
}
// Trigger flow execution
info!(target = "udmin", job = %job_name, flow_code = %flow_code, "scheduler.tick.start");
match flow_service::get_by_code(&db, &flow_code).await {
Ok(doc) => {
let id = doc.id.clone();
if let Err(e) = flow_service::run(
&db,
id,
flow_service::RunReq { input: serde_json::json!({}) },
Some((0, "scheduler".to_string())),
).await {
error!(target = "udmin", job = %job_name, flow_code = %flow_code, error = %e, "scheduler.tick.run_failed");
}
}
Err(e) => {
error!(target = "udmin", job = %job_name, flow_code = %flow_code, error = %e, "scheduler.tick.flow_not_found");
}
}
}) as Pin<Box<dyn Future<Output = ()> + Send>>
})
}
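The factory above clones each captured value twice: once when building the closure and once per tick, so every produced future owns its own copy of the data. A simplified synchronous sketch of the same pattern (`Executor`, `build_executor`, and the string payload are stand-ins, not the real `JobExecutor`):

```rust
use std::sync::Arc;

// Simplified, synchronous stand-in for the async JobExecutor type.
type Executor = Arc<dyn Fn() -> String + Send + Sync>;

fn build_executor(job_name: &str) -> Executor {
    let job_name = job_name.to_string(); // moved into the factory closure
    Arc::new(move || {
        // In the real code this is where the per-tick clone happens,
        // so the spawned future owns its data independently of the closure.
        format!("tick: {job_name}")
    })
}

fn main() {
    let exec = build_executor("nightly-report");
    assert_eq!(exec(), "tick: nightly-report");
    assert_eq!(exec(), "tick: nightly-report"); // callable on every tick
}
```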

backend/src/utils/ids.rs Normal file
View File

@ -0,0 +1,232 @@
use std::sync::Mutex;
use once_cell::sync::Lazy;
// Uses SnowflakeIdGenerator from the rs-snowflake crate.
// machine_id and node_id should each be < 32.
use snowflake::SnowflakeIdGenerator;
// Global generator (lazily initialized) to avoid repeated construction and ordering issues
static GENERATOR: Lazy<Mutex<SnowflakeIdGenerator>> = Lazy::new(|| {
let (machine_id, node_id) = read_ids_from_env();
Mutex::new(SnowflakeIdGenerator::new(machine_id, node_id))
});
fn read_ids_from_env() -> (i32, i32) {
let machine_id = std::env::var("ID_MACHINE_ID").ok().and_then(|s| s.parse::<i32>().ok()).unwrap_or(1);
let node_id = std::env::var("ID_NODE_ID").ok().and_then(|s| s.parse::<i32>().ok()).unwrap_or(1);
// Guard: clamp into 0..=31 to stay in range (the generator requires machine_id/node_id < 32)
let clamp = |v: i32| v.clamp(0, 31);
(clamp(machine_id), clamp(node_id))
}
/// Optional: eagerly initialize at startup and log the IDs for observability
pub fn init_from_env() {
let (machine_id, node_id) = read_ids_from_env();
// Force the Lazy initialization (without taking the lock, to avoid the unused-lock-binding lint)
Lazy::force(&GENERATOR);
tracing::info!(target: "udmin", "snowflake init: machine_id={} node_id={}", machine_id, node_id);
}
/// Business ID generator. Layout: main business (16 bits) | sub business (8 bits) | low 39 bits of the Snowflake ID => 63 bits total, keeping the top sign bit 0.
pub struct BizIdConfig {
pub main_id: u16, // 16 bits
pub sub_id: u8, // 8 bits
}
impl BizIdConfig {
pub const fn new(main_id: u16, sub_id: u8) -> Self { Self { main_id, sub_id } }
}
/// Generate a distributed ID with a business prefix; returns an i64
pub fn generate_biz_id(cfg: BizIdConfig) -> i64 {
let mut g = GENERATOR.lock().expect("gen snowflake id");
let base_id = g.real_time_generate();
// Keep only the low 39 bits
let snowflake_bits = base_id & ((1i64 << 39) - 1);
let main_bits = (cfg.main_id as i64) << (39 + 8);
let sub_bits = (cfg.sub_id as i64) << 39;
main_bits | sub_bits | snowflake_bits
}
/// Parse a business ID -> (main_id, sub_id, base_id)
pub fn parse_biz_id(id: i64) -> (u16, u8, i64) {
let main_id = ((id >> (39 + 8)) & 0xFFFF) as u16;
let sub_id = ((id >> 39) & 0xFF) as u8;
let base_id = id & ((1i64 << 39) - 1);
(main_id, sub_id, base_id)
}
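The 16/8/39 layout can be verified with a standalone round-trip sketch (`pack`/`unpack` are hypothetical stand-ins for `generate_biz_id`/`parse_biz_id`, with the Snowflake dependency removed):

```rust
// Standalone sketch of the 63-bit layout: main(16) | sub(8) | snowflake_low(39).
fn pack(main_id: u16, sub_id: u8, base: i64) -> i64 {
    ((main_id as i64) << 47) | ((sub_id as i64) << 39) | (base & ((1i64 << 39) - 1))
}

fn unpack(id: i64) -> (u16, u8, i64) {
    (
        ((id >> 47) & 0xFFFF) as u16,
        ((id >> 39) & 0xFF) as u8,
        id & ((1i64 << 39) - 1),
    )
}

fn main() {
    let id = pack(3, 1, 123_456_789);
    assert!(id > 0); // bit 63 (sign) is never set: main_id occupies bits 47..=62
    assert_eq!(unpack(id), (3, 1, 123_456_789));
}
```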
// --- Concrete business cases: the general-purpose ID scenario with main/sub = 1/1 ---
const FLOW_MAIN_ID: u16 = 1;
const FLOW_SUB_ID: u8 = 1;
/// General-purpose ID generation (main_id=1, sub_id=1); returns an i64 (format it as a decimal string where compatibility with the former string primary key is needed)
pub fn generate_id() -> i64 {
generate_biz_id(BizIdConfig::new(FLOW_MAIN_ID, FLOW_SUB_ID))
}
// --- Business bits and generators for log-type IDs ---
const FLOW_RUN_LOG_MAIN_ID: u16 = 2;
const FLOW_RUN_LOG_SUB_ID: u8 = 1;
pub fn generate_flow_run_log_id() -> i64 {
generate_biz_id(BizIdConfig::new(FLOW_RUN_LOG_MAIN_ID, FLOW_RUN_LOG_SUB_ID))
}
const REQUEST_LOG_MAIN_ID: u16 = 3;
const REQUEST_LOG_SUB_ID: u8 = 1;
pub fn generate_request_log_id() -> i64 {
generate_biz_id(BizIdConfig::new(REQUEST_LOG_MAIN_ID, REQUEST_LOG_SUB_ID))
}
#[cfg(test)]
mod tests {
use super::*;
use std::thread;
use std::time::Duration;
use std::collections::HashSet;
#[test]
fn test_id_sequential_generation() {
// Test 1: consecutive IDs are strictly increasing
let mut prev_id = 0i64;
for i in 1..=10 {
let current_id = generate_id();
println!("ID {}: {}", i, current_id);
if i > 1 {
assert!(current_id > prev_id,
"ID {} ({}) should be greater than the previous ID {} ({})", i, current_id, i-1, prev_id);
}
prev_id = current_id;
}
}
#[test]
fn test_id_time_interval_generation() {
// Test 2: IDs generated at intervals reflect the timestamp advancing
let mut time_based_ids = Vec::new();
for i in 1..=5 {
let id = generate_id();
time_based_ids.push(id);
println!("interval ID {}: {}", i, id);
thread::sleep(Duration::from_millis(10)); // short sleep to keep the test fast
}
// Verify the interval IDs are strictly increasing
for i in 1..time_based_ids.len() {
assert!(time_based_ids[i] > time_based_ids[i-1],
"interval ID {} ({}) should be greater than the previous ID ({})",
i+1, time_based_ids[i], time_based_ids[i-1]);
}
}
#[test]
fn test_different_business_id_types() {
// Test 3: IDs of different business types each increase within their type
let flow_id1 = generate_biz_id(BizIdConfig::new(1, 1));
thread::sleep(Duration::from_millis(1));
let flow_id2 = generate_biz_id(BizIdConfig::new(1, 1));
thread::sleep(Duration::from_millis(1));
let log_id1 = generate_biz_id(BizIdConfig::new(2, 1));
thread::sleep(Duration::from_millis(1));
let log_id2 = generate_biz_id(BizIdConfig::new(2, 1));
println!("Flow ID 1: {}", flow_id1);
println!("Flow ID 2: {}", flow_id2);
println!("Log ID 1: {}", log_id1);
println!("Log ID 2: {}", log_id2);
// Same-type business IDs must increase
assert!(flow_id2 > flow_id1, "flow ID 2 should be greater than flow ID 1");
assert!(log_id2 > log_id1, "log ID 2 should be greater than log ID 1");
}
#[test]
fn test_concurrent_id_generation() {
// Test 4: concurrent ID generation across threads
let handles: Vec<_> = (0..3).map(|thread_id| {
thread::spawn(move || {
let mut thread_ids = Vec::new();
for _ in 0..5 {
thread_ids.push(generate_id());
thread::sleep(Duration::from_millis(1));
}
(thread_id, thread_ids)
})
}).collect();
let mut all_ids = Vec::new();
for handle in handles {
let (thread_id, ids) = handle.join().unwrap();
println!("thread {} generated IDs: {:?}", thread_id, ids);
all_ids.extend(ids);
}
// All IDs must be unique
let unique_ids: HashSet<_> = all_ids.iter().collect();
assert_eq!(all_ids.len(), unique_ids.len(),
"duplicate IDs found; total: {}, unique: {}", all_ids.len(), unique_ids.len());
}
#[test]
fn test_id_timestamp_parsing() {
// Test 5: inspect the low bits of generated IDs
let id1 = generate_id();
thread::sleep(Duration::from_millis(10));
let id2 = generate_id();
// Extract the low 39 bits (the truncated Snowflake base, which embeds the timestamp)
let timestamp1 = id1 & ((1i64 << 39) - 1);
let timestamp2 = id2 & ((1i64 << 39) - 1);
println!("ID1: {}, low 39 bits: {}", id1, timestamp1);
println!("ID2: {}, low 39 bits: {}", id2, timestamp2);
// Note: IDs generated within the same millisecond share the timestamp part while the
// sequence number still increases, so we only assert that ID2 is greater than ID1.
assert!(id2 > id1, "ID2 should be greater than ID1");
}
#[test]
fn test_biz_id_parsing() {
// Test 6: business ID parsing
let config = BizIdConfig::new(123, 45);
let id = generate_biz_id(config);
let (main_id, sub_id, base_id) = parse_biz_id(id);
assert_eq!(main_id, 123, "parsed main_id should be 123");
assert_eq!(sub_id, 45, "parsed sub_id should be 45");
assert!(base_id > 0, "parsed base_id should be positive");
println!("original ID: {}, parsed: main_id={}, sub_id={}, base_id={}",
id, main_id, sub_id, base_id);
}
#[test]
fn test_specific_id_generators() {
// Test 7: the specific business ID generators
let flow_log_id1 = generate_flow_run_log_id();
let flow_log_id2 = generate_flow_run_log_id();
let request_log_id1 = generate_request_log_id();
let request_log_id2 = generate_request_log_id();
// Verify monotonicity
assert!(flow_log_id2 > flow_log_id1, "flow run log IDs should increase");
assert!(request_log_id2 > request_log_id1, "request log IDs should increase");
// Verify the business bits parse back correctly
let (main_id, sub_id, _) = parse_biz_id(flow_log_id1);
assert_eq!(main_id, FLOW_RUN_LOG_MAIN_ID, "flow run log main_id should match");
assert_eq!(sub_id, FLOW_RUN_LOG_SUB_ID, "flow run log sub_id should match");
let (main_id, sub_id, _) = parse_biz_id(request_log_id1);
assert_eq!(main_id, REQUEST_LOG_MAIN_ID, "request log main_id should match");
assert_eq!(sub_id, REQUEST_LOG_SUB_ID, "request log sub_id should match");
println!("flow run log IDs: {}, {}", flow_log_id1, flow_log_id2);
println!("request log IDs: {}, {}", request_log_id1, request_log_id2);
}
}

View File

@ -1 +1,6 @@
pub mod password;
pub mod ids;
pub mod scheduler;
pub use ids::{init_from_env, generate_biz_id, parse_biz_id, generate_id, BizIdConfig, generate_flow_run_log_id, generate_request_log_id};
pub use scheduler::{init_and_start as init_scheduler, add_or_update_job_by_model, remove_job_by_id, JobExecutor};

View File

@ -1,6 +1,6 @@
pub fn hash_password(plain: &str) -> anyhow::Result<String> {
use argon2::{password_hash::{SaltString, PasswordHasher, rand_core::OsRng}, Argon2};
let salt = SaltString::generate(&mut OsRng);
let hashed = Argon2::default()
.hash_password(plain.as_bytes(), &salt)
.map_err(|e| anyhow::anyhow!(e.to_string()))?

View File

@ -0,0 +1,99 @@
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use once_cell::sync::OnceCell;
use tokio::sync::Mutex;
use tokio_cron_scheduler::{JobScheduler, Job};
use tracing::{error, info};
use uuid::Uuid;
use crate::models::schedule_job;
static SCHEDULER: OnceCell<Mutex<JobScheduler>> = OnceCell::new();
static JOB_GUIDS: OnceCell<Mutex<HashMap<String, Uuid>>> = OnceCell::new();
pub type JobExecutor = Arc<dyn Fn() -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync>;
fn scheduler() -> &'static Mutex<JobScheduler> {
SCHEDULER
.get()
.expect("Scheduler not initialized. Call init_and_start() early in main.")
}
fn job_guids() -> &'static Mutex<HashMap<String, Uuid>> {
JOB_GUIDS
.get()
.expect("JOB_GUIDS not initialized. Call init_and_start() early in main.")
}
pub async fn init_and_start() -> anyhow::Result<()> {
if SCHEDULER.get().is_some() {
return Ok(());
}
let js = JobScheduler::new().await?;
SCHEDULER.set(Mutex::new(js)).ok().expect("set scheduler once");
JOB_GUIDS.set(Mutex::new(HashMap::new())).ok().expect("set job_guids once");
// Only start the scheduler here; loading jobs from the database is the service layer's responsibility
let lock = scheduler().lock().await;
lock.start().await?;
drop(lock);
info!(target = "udmin", "scheduler started");
Ok(())
}
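The initialize-once-then-read pattern used by `init_and_start` can be sketched with the standard library's `OnceLock` (the module itself uses `once_cell::sync::OnceCell`, which behaves the same for this purpose; the registry contents here are illustrative):

```rust
use std::sync::{Mutex, OnceLock};

// Init-once global, mirroring SCHEDULER/JOB_GUIDS above.
static REGISTRY: OnceLock<Mutex<Vec<String>>> = OnceLock::new();

fn init() {
    // set() returns Err if already initialized; ignoring it makes init() idempotent,
    // mirroring the early-return guard in init_and_start
    let _ = REGISTRY.set(Mutex::new(Vec::new()));
}

fn registry() -> &'static Mutex<Vec<String>> {
    REGISTRY.get().expect("call init() first")
}

fn main() {
    init();
    init(); // second call is a no-op
    registry().lock().unwrap().push("job-1".into());
    assert_eq!(registry().lock().unwrap().len(), 1);
}
```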
/// Add or update a job, keyed by model.id.
/// - If a job already exists, remove the old one first and recreate it with the latest cron;
/// - When enabled=false, only the removal is performed; the job is not re-added.
pub async fn add_or_update_job_by_model(m: &schedule_job::Model, executor: JobExecutor) -> anyhow::Result<()> {
// Remove the old job first, if one exists
if let Some(old) = { job_guids().lock().await.get(&m.id).cloned() } {
let js = scheduler().lock().await;
if let Err(e) = js.remove(&old).await {
error!(target = "udmin", id = %m.id, guid = %old, error = %e, "scheduler.remove old failed");
}
job_guids().lock().await.remove(&m.id);
}
if !m.enabled {
return Ok(());
}
// Build the async Job; it only fires on schedule and performs no database work
let cron_expr = m.cron_expr.clone();
let name = m.name.clone();
let job_id = m.id.clone();
let exec_arc = executor.clone();
let job = Job::new_async(cron_expr.as_str(), move |_uuid, _l| {
let job_name = name.clone();
let job_id = job_id.clone();
let exec = exec_arc.clone();
Box::pin(async move {
info!(target = "udmin", job = %job_name, id = %job_id, "scheduler.tick: start");
exec().await;
})
})?;
let guid = job.guid();
{
let js = scheduler().lock().await;
js.add(job).await?;
}
job_guids().lock().await.insert(m.id.clone(), guid);
info!(target = "udmin", id = %m.id, guid = %guid, "scheduler.add: ok");
Ok(())
}
pub async fn remove_job_by_id(id: &str) -> anyhow::Result<()> {
if let Some(g) = { job_guids().lock().await.get(id).cloned() } {
let js = scheduler().lock().await;
if let Err(e) = js.remove(&g).await {
error!(target = "udmin", id = %id, guid = %g, error = %e, "scheduler.remove: failed");
}
job_guids().lock().await.remove(id);
info!(target = "udmin", id = %id, guid = %g, "scheduler.remove: ok");
}
Ok(())
}

View File

@ -1 +0,0 @@
use argon2::{Argon2, PasswordHasher}; use argon2::password_hash::{SaltString, rand_core::OsRng}; fn main() { let argon2 = Argon2::default(); let salt = SaltString::generate(&mut OsRng); let password_hash = argon2.hash_password(b"123456", &salt).unwrap().to_string(); println!("{}", password_hash); }

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

check.git.sh Normal file
View File

@ -0,0 +1,31 @@
#!/bin/bash
DOMAIN="git.baqi.dev"
PORT=443
echo "🔍 Checking SSL/TLS status for $DOMAIN:$PORT..."
echo "--------------------------------------------------"
# 1. Show the certificate chain
echo -e "\n📜 Certificate chain:"
openssl s_client -connect $DOMAIN:$PORT -servername $DOMAIN -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates
# 2. Check certificate expiry
echo -e "\n⏳ Certificate validity:"
openssl s_client -connect $DOMAIN:$PORT -servername $DOMAIN </dev/null 2>/dev/null | openssl x509 -noout -dates
# 3. Test TLS protocol compatibility
for v in ssl3 tls1 tls1_1 tls1_2 tls1_3; do
echo -ne "\n⚡ Testing $v ... "
result=$(openssl s_client -connect $DOMAIN:$PORT -servername $DOMAIN -$v </dev/null 2>&1)
if echo "$result" | grep -q "Cipher is"; then
echo "✅ supported"
else
echo "❌ not supported"
fi
done
# 4. Simulate git access with curl
echo -e "\n🌐 Simulating access with curl:"
curl -vk https://$DOMAIN/ 2>&1 | grep "SSL" || echo "curl request OK"
echo -e "\n✅ Check complete."

View File

@ -0,0 +1,308 @@
# Backend Architecture
## Overview
The backend is built with Rust + Axum and follows a layered design: data access, business logic, routing, and middleware layers.
## Core Modules
### 1. Application entry (main.rs)
**Responsibility**: application startup, service configuration, middleware registration
**Main functions**:
- Database connection initialization
- Redis connection configuration
- CORS setup
- Multi-port services (HTTP/WebSocket/SSE)
- Logging middleware registration
**Service ports**:
- HTTP API: 9898 (default)
- WebSocket: 8877 (default)
- SSE: 8866 (default)
### 2. Database layer (db.rs)
**Responsibility**: database connection management and configuration
**Features**:
- Multiple databases supported (MySQL/PostgreSQL/SQLite)
- Connection pooling
- Transaction support
- Automatic migrations
### 3. Error handling (error.rs)
**Responsibility**: unified error type definitions and handling
**Error types**:
- `DatabaseError`: database operation errors
- `ValidationError`: data validation errors
- `AuthenticationError`: authentication errors
- `AuthorizationError`: authorization errors
- `NotFoundError`: resource not found
- `InternalServerError`: internal server errors
### 4. Response format (response.rs)
**Responsibility**: unified API response format
**Response structures**:
```rust
pub struct ApiResponse<T> {
pub code: i32,
pub message: String,
pub data: Option<T>,
}
pub struct PageResponse<T> {
pub items: Vec<T>,
pub total: u64,
pub page: u64,
pub page_size: u64,
}
```
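For illustration, a minimal sketch of how the `PageResponse` fields relate (`total_pages` is a hypothetical helper, not part of `response.rs`):

```rust
// Hypothetical helper showing the relationship between PageResponse fields.
struct PageResponse<T> {
    items: Vec<T>,
    total: u64,
    page: u64,
    page_size: u64,
}

fn total_pages(total: u64, page_size: u64) -> u64 {
    if page_size == 0 { 0 } else { (total + page_size - 1) / page_size } // ceiling division
}

fn main() {
    let resp = PageResponse { items: vec![1, 2, 3], total: 23, page: 3, page_size: 10 };
    assert_eq!(total_pages(resp.total, resp.page_size), 3); // 23 items at 10/page => 3 pages
    assert_eq!(resp.items.len(), 3);                        // the last page holds the remainder
    assert_eq!(resp.page, total_pages(resp.total, resp.page_size)); // this response is the last page
}
```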
## Business Modules
### Models (data model layer)
**Location**: `src/models/`
**Models**:
- `user.rs`: user model
- `role.rs`: role model
- `menu.rs`: menu model
- `department.rs`: department model
- `position.rs`: position model
- `flow.rs`: flow model
- `schedule_job.rs`: scheduled job model
- `flow_run_log.rs`: flow run log
- `request_log.rs`: request log
- `refresh_token.rs`: refresh token
**Features**:
- Generated with SeaORM macros
- Relational queries supported
- Automatic timestamp management
- Soft-delete support
### Services (business logic layer)
**Location**: `src/services/`
**Services**:
- `auth_service.rs`: authentication
- `user_service.rs`: user management
- `role_service.rs`: role management
- `menu_service.rs`: menu management
- `department_service.rs`: department management
- `position_service.rs`: position management
- `flow_service.rs`: flow management
- `schedule_job_service.rs`: scheduled jobs
- `flow_run_log_service.rs`: flow run logs
- `log_service.rs`: system logs
**Design principles**:
- Single responsibility
- Dependency injection
- Async processing
- Transaction management
### Routes (routing layer)
**Location**: `src/routes/`
**Route modules**:
- `auth.rs`: authentication routes
- `users.rs`: user management routes
- `roles.rs`: role management routes
- `menus.rs`: menu management routes
- `departments.rs`: department management routes
- `positions.rs`: position management routes
- `flows.rs`: flow management routes
- `schedule_jobs.rs`: scheduled job routes
- `flow_run_logs.rs`: flow run log routes
- `logs.rs`: system log routes
- `dynamic_api.rs`: dynamic API routes
**Routing features**:
- RESTful API design
- Parameter validation
- Permission checks
- Pagination support
- Error handling
### Middlewares (middleware layer)
**Location**: `src/middlewares/`
**Middlewares**:
- `jwt.rs`: JWT authentication
- `logging.rs`: request logging
- `http_client.rs`: HTTP client
- `ws.rs`: WebSocket service
- `sse.rs`: SSE service
**Features**:
- Request/response interception
- Authentication and authorization
- Logging
- CORS handling
- Real-time communication
### Utils (utilities)
**Location**: `src/utils/`
**Utilities**:
- `ids.rs`: ID generator (Snowflake algorithm)
- `password.rs`: password hashing
- `scheduler.rs`: job scheduler
**Features**:
- Distributed ID generation
- Secure password handling
- Scheduled job management
## Flow Engine
**Location**: `src/flow/`
### Core Components
#### 1. Domain model (domain.rs)
- `ChainDef`: flow chain definition
- `NodeDef`: node definition
- `LinkDef`: link definition
- `NodeKind`: node type enum
#### 2. DSL parsing (dsl.rs)
- `FlowDSL`: flow DSL structure
- `NodeDSL`: node DSL structure
- `DesignSyntax`: design syntax structure
- Validation and build functions
#### 3. Execution engine (engine.rs)
- `FlowEngine`: flow execution engine
- `TaskRegistry`: task registry
- `DriveOptions`: execution options
- Concurrent execution support
#### 4. Executors (executors/)
- `http.rs`: HTTP request executor
- `db.rs`: database operation executor
- `condition.rs`: condition executor
- `script_js.rs`: JavaScript script executor
- `script_python.rs`: Python script executor
- `script_rhai.rs`: Rhai script executor
- `variable.rs`: variable operation executor
#### 5. Context management (context.rs)
- `StreamEvent`: stream event definition
- Execution context management
- Event stream processing
#### 6. Log handling (log_handler.rs)
- Flow execution logs
- Error logging
- Performance monitoring
## Database Design
### Core Tables
#### Users and permissions
- `users`: users
- `roles`: roles
- `menus`: menus
- `departments`: departments
- `positions`: positions
- `user_roles`: user-role relation
- `role_menus`: role-menu relation
- `user_departments`: user-department relation
- `user_positions`: user-position relation
#### Flows
- `flows`: flows
- `flow_run_logs`: flow run logs
- `schedule_jobs`: scheduled jobs
#### System
- `request_logs`: request logs
- `refresh_tokens`: refresh tokens
### Index Strategy
- Primary key indexes
- Foreign key indexes
- Query-optimization indexes
- Composite indexes
## Security Mechanisms
### Authentication and authorization
- JWT tokens
- Refresh token support
- Permission middleware checks
- Role-based access control (RBAC)
### Data security
- Argon2 password hashing
- SQL injection protection
- XSS protection
- CSRF protection
### Transport security
- HTTPS support
- CORS configuration
- Request rate limiting
- Audit logging
## Performance Optimization
### Database
- Connection pooling
- Query optimization
- Index strategy
- Paginated queries
### Caching
- Redis cache
- Query result cache
- Session cache
### Concurrency
- Async I/O
- Task queues
- Connection reuse
- Resource pooling
## Monitoring and Logging
### Logging
- Structured logging (tracing)
- Leveled log records
- Request tracing
- Error stack recording
### Metrics
- Request latency
- Database connection status
- Memory usage
- Error rates
## Deployment Configuration
### Environment variables
- Database connection settings
- Redis connection settings
- JWT secret settings
- Service port settings
- CORS settings
### Containerization
- Docker support
- Multi-stage builds
- Health checks
- Resource limits

docs/DATABASE.md Normal file

File diff suppressed because it is too large

docs/ERROR_HANDLING.md Normal file
View File

@ -0,0 +1,878 @@
# UdminAI Error Handling Module
## Overview
The error handling module provides unified error definition, handling, and response mechanisms. It is built on Rust's `Result` type and the `thiserror` crate to keep errors consistent, traceable, and user-friendly.
## Design Principles
### Core ideas
- **Uniformity**: all modules use the same error type
- **Traceability**: errors carry enough context
- **User-friendly**: user-facing error messages are clear
- **Developer-friendly**: developer-facing details are precise
- **Type safety**: errors are checked at compile time
### Error layers
1. **Application layer**: business logic errors
2. **Service layer**: service call errors
3. **Data layer**: database and cache errors
4. **Network layer**: HTTP and network communication errors
5. **System layer**: system resource and configuration errors
## Error Type Definitions (error.rs)
### Main error types
```rust
use axum::{
http::StatusCode,
response::{IntoResponse, Response},
Json,
};
use serde::{Deserialize, Serialize};
use std::fmt;
use thiserror::Error;
use tracing::error;
/// Main application error type
#[derive(Error, Debug, Clone, Serialize, Deserialize)]
pub enum AppError {
// Authentication and authorization errors
#[error("authentication failed: {message}")]
AuthenticationFailed { message: String },
#[error("authorization failed: {message}")]
AuthorizationFailed { message: String },
#[error("invalid token: {message}")]
InvalidToken { message: String },
#[error("token expired")]
TokenExpired,
// Validation errors
#[error("validation failed: {field} - {message}")]
ValidationFailed { field: String, message: String },
#[error("invalid request parameters: {message}")]
InvalidRequest { message: String },
#[error("missing required field: {field}")]
MissingField { field: String },
// Resource errors (tuple variants must use positional placeholders like {0})
#[error("resource not found: {0}")]
NotFound(String),
#[error("resource already exists: {0}")]
AlreadyExists(String),
#[error("resource conflict: {message}")]
Conflict { message: String },
// Business logic errors
#[error("business rule violated: {message}")]
BusinessRuleViolation { message: String },
#[error("operation not allowed: {message}")]
OperationNotAllowed { message: String },
#[error("invalid state: current {current}, expected {expected}")]
InvalidState { current: String, expected: String },
// Database errors
#[error("database error: {0}")]
DatabaseError(String),
#[error("database connection failed: {message}")]
DatabaseConnectionFailed { message: String },
#[error("transaction failed: {message}")]
TransactionFailed { message: String },
// Cache errors
#[error("cache error: {0}")]
CacheError(String),
#[error("cache connection failed: {message}")]
CacheConnectionFailed { message: String },
// Network and external service errors
#[error("network error: {message}")]
NetworkError { message: String },
#[error("external service error: {service} - {message}")]
ExternalServiceError { service: String, message: String },
#[error("HTTP request failed: {status} - {message}")]
HttpRequestFailed { status: u16, message: String },
// File and I/O errors
#[error("file operation failed: {message}")]
FileOperationFailed { message: String },
#[error("file not found: {path}")]
FileNotFound { path: String },
#[error("file permission denied: {path}")]
FilePermissionDenied { path: String },
// Configuration and environment errors
#[error("configuration error: {message}")]
ConfigurationError { message: String },
#[error("missing environment variable: {variable}")]
MissingEnvironmentVariable { variable: String },
// Serialization and deserialization errors
#[error("serialization failed: {message}")]
SerializationFailed { message: String },
#[error("deserialization failed: {message}")]
DeserializationFailed { message: String },
#[error("JSON format error: {message}")]
JsonFormatError { message: String },
// Flow engine errors
#[error("flow execution failed: {flow_id} - {message}")]
FlowExecutionFailed { flow_id: String, message: String },
#[error("flow parsing failed: {message}")]
FlowParsingFailed { message: String },
#[error("node execution failed: {node_id} - {message}")]
NodeExecutionFailed { node_id: String, message: String },
// Scheduled job errors
#[error("job scheduling failed: {job_id} - {message}")]
JobSchedulingFailed { job_id: String, message: String },
#[error("invalid cron expression: {expression}")]
InvalidCronExpression { expression: String },
#[error("job execution timed out: {job_id}")]
JobExecutionTimeout { job_id: String },
// System errors
#[error("internal server error: {message}")]
InternalServerError { message: String },
#[error("service unavailable: {message}")]
ServiceUnavailable { message: String },
#[error("request timed out: {message}")]
RequestTimeout { message: String },
#[error("resource exhausted: {resource}")]
ResourceExhausted { resource: String },
// Rate limiting and security errors
#[error("rate limit exceeded: {message}")]
RateLimitExceeded { message: String },
#[error("payload too large: current size {current}, max allowed {max}")]
PayloadTooLarge { current: usize, max: usize },
#[error("unsupported media type: {media_type}")]
UnsupportedMediaType { media_type: String },
}
/// Error response structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ErrorResponse {
pub error: ErrorDetail,
pub timestamp: chrono::DateTime<chrono::Utc>,
pub request_id: Option<String>,
}
/// Error detail
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ErrorDetail {
pub code: String,
pub message: String,
pub details: Option<serde_json::Value>,
pub field: Option<String>,
}
impl AppError {
/// Error code for this variant
pub fn error_code(&self) -> &'static str {
match self {
// Authentication and authorization
AppError::AuthenticationFailed { .. } => "AUTH_FAILED",
AppError::AuthorizationFailed { .. } => "AUTHORIZATION_FAILED",
AppError::InvalidToken { .. } => "INVALID_TOKEN",
AppError::TokenExpired => "TOKEN_EXPIRED",
// Validation
AppError::ValidationFailed { .. } => "VALIDATION_FAILED",
AppError::InvalidRequest { .. } => "INVALID_REQUEST",
AppError::MissingField { .. } => "MISSING_FIELD",
// Resources
AppError::NotFound(_) => "NOT_FOUND",
AppError::AlreadyExists(_) => "ALREADY_EXISTS",
AppError::Conflict { .. } => "CONFLICT",
// Business logic
AppError::BusinessRuleViolation { .. } => "BUSINESS_RULE_VIOLATION",
AppError::OperationNotAllowed { .. } => "OPERATION_NOT_ALLOWED",
AppError::InvalidState { .. } => "INVALID_STATE",
// Database
AppError::DatabaseError(_) => "DATABASE_ERROR",
AppError::DatabaseConnectionFailed { .. } => "DATABASE_CONNECTION_FAILED",
AppError::TransactionFailed { .. } => "TRANSACTION_FAILED",
// Cache
AppError::CacheError(_) => "CACHE_ERROR",
AppError::CacheConnectionFailed { .. } => "CACHE_CONNECTION_FAILED",
// Network
AppError::NetworkError { .. } => "NETWORK_ERROR",
AppError::ExternalServiceError { .. } => "EXTERNAL_SERVICE_ERROR",
AppError::HttpRequestFailed { .. } => "HTTP_REQUEST_FAILED",
// Files
AppError::FileOperationFailed { .. } => "FILE_OPERATION_FAILED",
AppError::FileNotFound { .. } => "FILE_NOT_FOUND",
AppError::FilePermissionDenied { .. } => "FILE_PERMISSION_DENIED",
// Configuration
AppError::ConfigurationError { .. } => "CONFIGURATION_ERROR",
AppError::MissingEnvironmentVariable { .. } => "MISSING_ENV_VAR",
// Serialization
AppError::SerializationFailed { .. } => "SERIALIZATION_FAILED",
AppError::DeserializationFailed { .. } => "DESERIALIZATION_FAILED",
AppError::JsonFormatError { .. } => "JSON_FORMAT_ERROR",
// Flow engine
AppError::FlowExecutionFailed { .. } => "FLOW_EXECUTION_FAILED",
AppError::FlowParsingFailed { .. } => "FLOW_PARSING_FAILED",
AppError::NodeExecutionFailed { .. } => "NODE_EXECUTION_FAILED",
// Scheduled jobs
AppError::JobSchedulingFailed { .. } => "JOB_SCHEDULING_FAILED",
AppError::InvalidCronExpression { .. } => "INVALID_CRON_EXPRESSION",
AppError::JobExecutionTimeout { .. } => "JOB_EXECUTION_TIMEOUT",
// System
AppError::InternalServerError { .. } => "INTERNAL_SERVER_ERROR",
AppError::ServiceUnavailable { .. } => "SERVICE_UNAVAILABLE",
AppError::RequestTimeout { .. } => "REQUEST_TIMEOUT",
AppError::ResourceExhausted { .. } => "RESOURCE_EXHAUSTED",
// Rate limiting and security
AppError::RateLimitExceeded { .. } => "RATE_LIMIT_EXCEEDED",
AppError::PayloadTooLarge { .. } => "PAYLOAD_TOO_LARGE",
AppError::UnsupportedMediaType { .. } => "UNSUPPORTED_MEDIA_TYPE",
}
}
/// HTTP status code for this variant
pub fn status_code(&self) -> StatusCode {
match self {
// 4xx client errors
AppError::AuthenticationFailed { .. } => StatusCode::UNAUTHORIZED,
AppError::AuthorizationFailed { .. } => StatusCode::FORBIDDEN,
AppError::InvalidToken { .. } => StatusCode::UNAUTHORIZED,
AppError::TokenExpired => StatusCode::UNAUTHORIZED,
AppError::ValidationFailed { .. } => StatusCode::BAD_REQUEST,
AppError::InvalidRequest { .. } => StatusCode::BAD_REQUEST,
AppError::MissingField { .. } => StatusCode::BAD_REQUEST,
AppError::NotFound(_) => StatusCode::NOT_FOUND,
AppError::AlreadyExists(_) => StatusCode::CONFLICT,
AppError::Conflict { .. } => StatusCode::CONFLICT,
AppError::BusinessRuleViolation { .. } => StatusCode::BAD_REQUEST,
AppError::OperationNotAllowed { .. } => StatusCode::FORBIDDEN,
AppError::InvalidState { .. } => StatusCode::BAD_REQUEST,
AppError::FileNotFound { .. } => StatusCode::NOT_FOUND,
AppError::FilePermissionDenied { .. } => StatusCode::FORBIDDEN,
AppError::JsonFormatError { .. } => StatusCode::BAD_REQUEST,
AppError::InvalidCronExpression { .. } => StatusCode::BAD_REQUEST,
AppError::RateLimitExceeded { .. } => StatusCode::TOO_MANY_REQUESTS,
AppError::PayloadTooLarge { .. } => StatusCode::PAYLOAD_TOO_LARGE,
AppError::UnsupportedMediaType { .. } => StatusCode::UNSUPPORTED_MEDIA_TYPE,
// 5xx server errors
AppError::DatabaseError(_) => StatusCode::INTERNAL_SERVER_ERROR,
AppError::DatabaseConnectionFailed { .. } => StatusCode::SERVICE_UNAVAILABLE,
AppError::TransactionFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::CacheError(_) => StatusCode::INTERNAL_SERVER_ERROR,
AppError::CacheConnectionFailed { .. } => StatusCode::SERVICE_UNAVAILABLE,
AppError::NetworkError { .. } => StatusCode::BAD_GATEWAY,
AppError::ExternalServiceError { .. } => StatusCode::BAD_GATEWAY,
AppError::HttpRequestFailed { .. } => StatusCode::BAD_GATEWAY,
AppError::FileOperationFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::ConfigurationError { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::MissingEnvironmentVariable { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::SerializationFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::DeserializationFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::FlowExecutionFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::FlowParsingFailed { .. } => StatusCode::BAD_REQUEST,
AppError::NodeExecutionFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::JobSchedulingFailed { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::JobExecutionTimeout { .. } => StatusCode::REQUEST_TIMEOUT,
AppError::InternalServerError { .. } => StatusCode::INTERNAL_SERVER_ERROR,
AppError::ServiceUnavailable { .. } => StatusCode::SERVICE_UNAVAILABLE,
AppError::RequestTimeout { .. } => StatusCode::REQUEST_TIMEOUT,
AppError::ResourceExhausted { .. } => StatusCode::SERVICE_UNAVAILABLE,
}
}
/// Offending field, if applicable
pub fn error_field(&self) -> Option<String> {
match self {
AppError::ValidationFailed { field, .. } => Some(field.clone()),
AppError::MissingField { field } => Some(field.clone()),
_ => None,
}
}
/// Whether this is a client error
pub fn is_client_error(&self) -> bool {
self.status_code().is_client_error()
}
/// Whether this is a server error
pub fn is_server_error(&self) -> bool {
self.status_code().is_server_error()
}
/// Build the error response
pub fn to_error_response(&self, request_id: Option<String>) -> ErrorResponse {
ErrorResponse {
error: ErrorDetail {
code: self.error_code().to_string(),
message: self.to_string(),
details: None,
field: self.error_field(),
},
timestamp: chrono::Utc::now(),
request_id,
}
}
/// Log the error
pub fn log_error(&self, request_id: Option<&str>) {
let level = if self.is_server_error() {
tracing::Level::ERROR
} else {
tracing::Level::WARN
};
match level {
tracing::Level::ERROR => {
error!(
target = "udmin",
error_code = %self.error_code(),
error_message = %self,
request_id = ?request_id,
"application.error.server"
);
}
_ => {
tracing::warn!(
target = "udmin",
error_code = %self.error_code(),
error_message = %self,
request_id = ?request_id,
"application.error.client"
);
}
}
}
}
/// Implement IntoResponse so errors can be returned directly as HTTP responses
impl IntoResponse for AppError {
fn into_response(self) -> Response {
// Obtain the request_id from the request context (a real implementation would pass it through middleware)
let request_id = None; // should be taken from the request context
// Log the error
self.log_error(request_id.as_deref());
// Build the error response
let error_response = self.to_error_response(request_id);
let status_code = self.status_code();
(status_code, Json(error_response)).into_response()
}
}
/// Application result type alias
pub type AppResult<T> = Result<T, AppError>;
/// Error conversions
impl From<sea_orm::DbErr> for AppError {
fn from(err: sea_orm::DbErr) -> Self {
match err {
sea_orm::DbErr::RecordNotFound(_) => AppError::NotFound("record not found".to_string()),
sea_orm::DbErr::Custom(msg) => AppError::DatabaseError(msg),
sea_orm::DbErr::Conn(msg) => AppError::DatabaseConnectionFailed { message: msg },
sea_orm::DbErr::Exec(msg) => AppError::DatabaseError(msg),
sea_orm::DbErr::Query(msg) => AppError::DatabaseError(msg),
_ => AppError::DatabaseError("unknown database error".to_string()),
}
}
}
impl From<redis::RedisError> for AppError {
fn from(err: redis::RedisError) -> Self {
match err.kind() {
redis::ErrorKind::IoError => AppError::CacheConnectionFailed {
message: err.to_string(),
},
redis::ErrorKind::AuthenticationFailed => AppError::CacheConnectionFailed {
message: "Redis authentication failed".to_string(),
},
_ => AppError::CacheError(err.to_string()),
}
}
}
impl From<reqwest::Error> for AppError {
fn from(err: reqwest::Error) -> Self {
if err.is_timeout() {
AppError::RequestTimeout {
message: "HTTP request timed out".to_string(),
}
} else if err.is_connect() {
AppError::NetworkError {
message: "network connection failed".to_string(),
}
} else if let Some(status) = err.status() {
AppError::HttpRequestFailed {
status: status.as_u16(),
message: err.to_string(),
}
} else {
AppError::NetworkError {
message: err.to_string(),
}
}
}
}
impl From<serde_json::Error> for AppError {
fn from(err: serde_json::Error) -> Self {
if err.is_syntax() {
AppError::JsonFormatError {
message: "JSON syntax error".to_string(),
}
} else if err.is_data() {
AppError::DeserializationFailed {
message: err.to_string(),
}
} else {
AppError::SerializationFailed {
message: err.to_string(),
}
}
}
}
impl From<std::io::Error> for AppError {
fn from(err: std::io::Error) -> Self {
match err.kind() {
std::io::ErrorKind::NotFound => AppError::FileNotFound {
path: "未知路径".to_string(),
},
std::io::ErrorKind::PermissionDenied => AppError::FilePermissionDenied {
path: "未知路径".to_string(),
},
_ => AppError::FileOperationFailed {
message: err.to_string(),
},
}
}
}
impl From<tokio::time::error::Elapsed> for AppError {
fn from(_: tokio::time::error::Elapsed) -> Self {
AppError::RequestTimeout {
message: "操作超时".to_string(),
}
}
}
/// Error builder
pub struct ErrorBuilder {
    error: AppError,
}
impl ErrorBuilder {
    pub fn new(error: AppError) -> Self {
        Self { error }
    }
    /// Placeholder: could be extended to attach structured details to the error.
    pub fn with_details(self, _details: serde_json::Value) -> Self {
        self
    }
    /// Placeholder: could be extended to set the offending field on the error.
    pub fn with_field(self, _field: String) -> Self {
        self
    }
    pub fn build(self) -> AppError {
        self.error
    }
}
/// Error construction macro
#[macro_export]
macro_rules! app_error {
    ($error_type:ident, $($field:ident = $value:expr),*) => {
        AppError::$error_type {
            $($field: $value.into()),*
        }
    };
}
/// Result extension trait
pub trait ResultExt<T> {
    /// Convert the error into an AppError
    fn map_app_error<F>(self, f: F) -> AppResult<T>
    where
        F: FnOnce() -> AppError;
    /// Attach context to the error message
    fn with_context<F>(self, f: F) -> AppResult<T>
    where
        F: FnOnce() -> String;
}
impl<T, E> ResultExt<T> for Result<T, E>
where
    E: Into<AppError>,
{
    fn map_app_error<F>(self, f: F) -> AppResult<T>
    where
        F: FnOnce() -> AppError,
    {
        self.map_err(|_| f())
    }
    fn with_context<F>(self, f: F) -> AppResult<T>
    where
        F: FnOnce() -> String,
    {
        self.map_err(|e| {
            let context = f();
            match e.into() {
                AppError::InternalServerError { message } => AppError::InternalServerError {
                    message: format!("{}: {}", context, message),
                },
                other => other,
            }
        })
    }
}
#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn test_error_codes() {
        let error = AppError::NotFound("user".to_string());
        assert_eq!(error.error_code(), "NOT_FOUND");
        assert_eq!(error.status_code(), StatusCode::NOT_FOUND);
    }
    #[test]
    fn test_error_response() {
        let error = AppError::ValidationFailed {
            field: "email".to_string(),
            message: "invalid format".to_string(),
        };
        let response = error.to_error_response(Some("req-123".to_string()));
        assert_eq!(response.error.code, "VALIDATION_FAILED");
        assert_eq!(response.error.field, Some("email".to_string()));
        assert_eq!(response.request_id, Some("req-123".to_string()));
    }
    #[test]
    fn test_error_conversion() {
        let db_error = sea_orm::DbErr::RecordNotFound("test".to_string());
        let app_error: AppError = db_error.into();
        assert!(matches!(app_error, AppError::NotFound(_)));
    }
    #[test]
    fn test_error_macro() {
        let error = app_error!(ValidationFailed,
            field = "username",
            message = "username already exists"
        );
        match error {
            AppError::ValidationFailed { field, message } => {
                assert_eq!(field, "username");
                assert_eq!(message, "username already exists");
            }
            _ => panic!("expected ValidationFailed error"),
        }
    }
    #[test]
    fn test_result_ext() {
        let result: Result<i32, &str> = Err("test error");
        let app_result = result.map_app_error(|| AppError::InternalServerError {
            message: "conversion error".to_string(),
        });
        assert!(app_result.is_err());
    }
}
```
## Error-Handling Strategies
### Error Propagation
```rust
/// Error propagation example
pub async fn create_user(req: CreateUserReq) -> AppResult<UserDoc> {
    // Validate the request
    validate_create_user_request(&req)?;
    // Check whether the user already exists
    if user_exists(&req.email).await? {
        return Err(AppError::AlreadyExists("user email already exists".to_string()));
    }
    // Create the user
    let user = User::create(req).await
        .with_context(|| "failed to create user")?;
    Ok(user.into())
}
```
### Error Recovery
```rust
/// Error recovery example
pub async fn get_user_with_fallback(id: &str) -> AppResult<UserDoc> {
    // Try the cache first
    match get_user_from_cache(id).await {
        Ok(user) => return Ok(user),
        Err(AppError::CacheError(_)) => {
            // Cache error: fall back to the database
            tracing::warn!("failed to fetch user from cache, falling back to database");
        }
        Err(e) => return Err(e),
    }
    // Fetch from the database
    let user = get_user_from_db(id).await?;
    // Try to refresh the cache (ignore failures)
    if let Err(e) = set_user_cache(id, &user).await {
        tracing::warn!(error = %e, "failed to update user cache");
    }
    Ok(user)
}
```
### Error Aggregation
```rust
/// Error aggregation example
pub struct ValidationErrors {
    pub errors: Vec<AppError>,
}
impl ValidationErrors {
    pub fn new() -> Self {
        Self { errors: Vec::new() }
    }
    pub fn add(&mut self, error: AppError) {
        self.errors.push(error);
    }
    pub fn is_empty(&self) -> bool {
        self.errors.is_empty()
    }
    pub fn into_result(self) -> AppResult<()> {
        if self.errors.is_empty() {
            Ok(())
        } else {
            // Return the first error; alternatively, introduce an aggregate error type
            Err(self.errors.into_iter().next().unwrap())
        }
    }
}
pub fn validate_user_data(data: &CreateUserReq) -> AppResult<()> {
    let mut errors = ValidationErrors::new();
    if data.email.is_empty() {
        errors.add(AppError::MissingField { field: "email".to_string() });
    }
    if data.password.len() < 8 {
        errors.add(AppError::ValidationFailed {
            field: "password".to_string(),
            message: "password must be at least 8 characters".to_string(),
        });
    }
    errors.into_result()
}
```
## Middleware Integration
### Error-Handling Middleware
```rust
use axum::{
    extract::Request,
    middleware::Next,
    response::Response,
};
use uuid::Uuid;
/// Error-handling middleware
pub async fn error_handler_middleware(
    mut request: Request,
    next: Next,
) -> Response {
    // Generate a request ID
    let request_id = Uuid::new_v4().to_string();
    request.extensions_mut().insert(request_id.clone());
    // Run the request
    let response = next.run(request).await;
    // If the response is an error, log it together with the request ID
    if response.status().is_client_error() || response.status().is_server_error() {
        // The request ID could also be added as a response header here
        tracing::info!(
            target = "udmin",
            request_id = %request_id,
            status = %response.status(),
            "request.completed.error"
        );
    }
    response
}
```
## Monitoring and Alerting
### Error Metrics Collection
```rust
use prometheus::{Counter, Histogram, HistogramOpts, Registry};
/// Error metrics collector
#[derive(Clone)]
pub struct ErrorMetrics {
    error_counter: Counter,
    error_duration: Histogram,
}
impl ErrorMetrics {
    pub fn new(registry: &Registry) -> Self {
        let error_counter = Counter::new(
            "app_errors_total",
            "Total number of application errors",
        ).unwrap();
        let error_duration = Histogram::with_opts(HistogramOpts::new(
            "error_handling_duration_seconds",
            "Time spent handling errors",
        )).unwrap();
        registry.register(Box::new(error_counter.clone())).unwrap();
        registry.register(Box::new(error_duration.clone())).unwrap();
        Self {
            error_counter,
            error_duration,
        }
    }
    pub fn record_error(&self, error: &AppError) {
        self.error_counter.inc();
        // Per-error-type labels could be added here
        tracing::info!(
            target = "udmin",
            error_code = %error.error_code(),
            "metrics.error.recorded"
        );
    }
}
```
## Best Practices
### Error Design Principles
1. **Clarity**: error messages should state exactly what went wrong
2. **Actionability**: error messages should tell the user how to fix the problem
3. **Consistency**: errors of the same kind should share a common format and handling path
4. **Safety**: never leak sensitive information in error messages
### Error-Handling Patterns
1. **Fail fast**: detect and report errors as early as possible
2. **Graceful degradation**: provide a fallback where one exists
3. **Error isolation**: keep errors from propagating through the system
4. **Error recovery**: attempt to recover from errors when appropriate
### Logging
1. **Structured logs**: record errors in a structured format
2. **Context**: include enough context for debugging
3. **Sensitive data**: avoid logging sensitive information
4. **Log levels**: choose a level that matches the error's severity
## Summary
UdminAI's error-handling module provides a complete error-management solution with the following properties:
- **Type safety**: error types are checked at compile time
- **Unified handling**: all errors share one type and response format
- **User friendliness**: clear messages and appropriate HTTP status codes
- **Observability**: full error logging and metrics collection
- **Extensibility**: new error types and handling logic are easy to add
This design keeps the system stable, maintainable, and pleasant to use.

docs/FLOW_ENGINE.md (new file, 484 lines)
# Flow Engine Documentation
## Overview
The flow engine is the core module of UdminAI. It provides visual flow design, execution, and monitoring, and supports multiple node types, conditional branching, loop control, and concurrent execution.
## Architecture
### Core Components
```
flow/
├── domain.rs        # domain model definitions
├── dsl.rs           # DSL parsing and building
├── engine.rs        # flow execution engine
├── context.rs       # execution context management
├── task.rs          # task abstraction interface
├── log_handler.rs   # log handling
├── mappers.rs       # data mappers
├── executors/       # executor implementations
└── mappers/         # concrete mappers
```
## Domain Model (domain.rs)
### Core Data Structures
#### ChainDef - Flow Chain Definition
```rust
pub struct ChainDef {
    pub nodes: Vec<NodeDef>,   // node list
    pub links: Vec<LinkDef>,   // link list
}
```
#### NodeDef - Node Definition
```rust
pub struct NodeDef {
    pub id: NodeId,                // unique node identifier
    pub kind: NodeKind,            // node type
    pub data: serde_json::Value,   // node configuration data
}
```
#### NodeKind - Node Types
```rust
pub enum NodeKind {
    Start,          // start node
    End,            // end node
    Condition,      // condition node
    Http,           // HTTP request node
    Database,       // database operation node
    ScriptJs,       // JavaScript script node
    ScriptPython,   // Python script node
    ScriptRhai,     // Rhai script node
    Variable,       // variable operation node
    Task,           // generic task node
}
```
#### LinkDef - Link Definition
```rust
pub struct LinkDef {
    pub from: NodeId,               // source node
    pub to: NodeId,                 // target node
    pub condition: Option<String>,  // link condition
}
```
## DSL Parsing (dsl.rs)
### DSL Structures
#### FlowDSL - Flow DSL
```rust
pub struct FlowDSL {
    pub nodes: Vec<NodeDSL>,   // node list
    pub edges: Vec<EdgeDSL>,   // edge list
}
```
#### DesignSyntax - Design Syntax
```rust
pub struct DesignSyntax {
    pub nodes: Vec<NodeSyntax>,   // node syntax
    pub edges: Vec<EdgeSyntax>,   // edge syntax
}
```
### Core Functionality
#### 1. Design Validation
```rust
pub fn validate_design(design: &DesignSyntax) -> anyhow::Result<()>
```
**Validation rules**:
- node IDs are unique
- at least one start node is present
- at least one end node is present
- edges only reference existing nodes
- condition nodes are fully configured
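These rules can be sketched as a standalone check. The `Node` and `Edge` shapes below are simplified stand-ins for the real `DesignSyntax` types, which are not shown in full here:

```rust
use std::collections::HashSet;

// Simplified, hypothetical stand-ins for the DSL node/edge types.
pub struct Node { pub id: String, pub kind: String }
pub struct Edge { pub from: String, pub to: String }

/// Validate a design per the rules above: unique node IDs, at least
/// one start and one end node, and edges that only reference
/// existing nodes.
pub fn validate_design(nodes: &[Node], edges: &[Edge]) -> Result<(), String> {
    let mut ids = HashSet::new();
    for n in nodes {
        if !ids.insert(n.id.as_str()) {
            return Err(format!("duplicate node id: {}", n.id));
        }
    }
    if !nodes.iter().any(|n| n.kind == "start") {
        return Err("missing start node".into());
    }
    if !nodes.iter().any(|n| n.kind == "end") {
        return Err("missing end node".into());
    }
    for e in edges {
        if !ids.contains(e.from.as_str()) || !ids.contains(e.to.as_str()) {
            return Err(format!("edge references unknown node: {} -> {}", e.from, e.to));
        }
    }
    Ok(())
}
```

The condition-node completeness check is omitted here because its configuration schema is node-specific.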
#### 2. Chain Building
```rust
pub fn build_chain_from_design(design: &DesignSyntax) -> anyhow::Result<ChainDef>
```
**Build steps**:
1. Infer node types
2. Process condition nodes
3. Establish edge relationships
4. Check data integrity
#### 3. Compatibility Handling
```rust
pub fn chain_from_design_json(input: &str) -> anyhow::Result<ChainDef>
```
**Compatibility features**:
- accepts both string and object input
- backfills missing fields
- tolerates older design versions
- recovers from errors
## Execution Engine (engine.rs)
### FlowEngine
#### Core Structure
```rust
pub struct FlowEngine {
    tasks: TaskRegistry,   // task registry
}
```
#### Execution Options
```rust
pub struct DriveOptions {
    pub max_steps: Option<usize>,   // maximum number of steps
    pub timeout_ms: Option<u64>,    // timeout in milliseconds
    pub parallel: bool,             // concurrent execution
    pub stream_events: bool,        // stream-event support
}
```
### Execution Flow
#### 1. Entry-Point Selection
- Prefer a Start node
- Otherwise, pick a node with in-degree 0
- Otherwise, fall back to the first node
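The selection priority above can be sketched as follows, again with simplified node/link shapes rather than the engine's actual types:

```rust
use std::collections::HashMap;

// Hypothetical simplified node/link shapes for illustration.
pub struct Node { pub id: String, pub kind: String }
pub struct Link { pub from: String, pub to: String }

/// Pick the entry node: an explicit start node if present, else a
/// node with in-degree 0, else simply the first node.
pub fn pick_entry(nodes: &[Node], links: &[Link]) -> Option<String> {
    if let Some(n) = nodes.iter().find(|n| n.kind == "start") {
        return Some(n.id.clone());
    }
    // Count incoming links per node.
    let mut indeg: HashMap<&str, usize> =
        nodes.iter().map(|n| (n.id.as_str(), 0)).collect();
    for l in links {
        if let Some(d) = indeg.get_mut(l.to.as_str()) {
            *d += 1;
        }
    }
    if let Some(n) = nodes.iter().find(|n| indeg[n.id.as_str()] == 0) {
        return Some(n.id.clone());
    }
    nodes.first().map(|n| n.id.clone())
}
```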
#### 2. Execution Strategies
- **Serial execution**: run nodes one by one in dependency order
- **Concurrent execution**: run independent nodes in parallel
- **Conditional branching**: choose the execution path based on conditions
- **Loop control**: support loop-node execution
#### 3. State Management
- track per-node execution state
- pass context data between nodes
- handle error states
- collect execution results
### Task Registry (TaskRegistry)
#### Registration
```rust
pub struct TaskRegistry {
    executors: HashMap<NodeKind, Box<dyn Executor>>,
}
```
#### Executor Interface
```rust
#[async_trait]
pub trait Executor: Send + Sync {
    async fn execute(
        &self,
        node_id: &NodeId,
        node: &NodeDef,
        ctx: &mut serde_json::Value,
    ) -> anyhow::Result<()>;
}
```
## Executor Implementations (executors/)
### HTTP Executor (http.rs)
**Purpose**: execute HTTP requests
**Configuration**:
- `method`: request method (GET/POST/PUT/DELETE)
- `url`: request URL
- `headers`: request headers
- `query`: query parameters
- `body`: request body
- `timeout_ms`: timeout in milliseconds
- `insecure`: skip SSL verification
**Execution steps**:
1. Parse the HTTP configuration
2. Build the request parameters
3. Send the HTTP request
4. Process the response
5. Update the execution context
### Database Executor (db.rs)
**Purpose**: execute database operations
**Supported operations**:
- `SELECT`: query data
- `INSERT`: insert data
- `UPDATE`: update data
- `DELETE`: delete data
- `TRANSACTION`: transactional operations
**Configuration**:
- `sql`: SQL statement
- `params`: bound parameters
- `connection`: connection settings
- `transaction`: transaction control
### Condition Executor (condition.rs)
**Purpose**: condition evaluation and branch control
**Condition types**:
- simple comparisons (==, !=, >, <, >=, <=)
- logical operators (AND, OR, NOT)
- regular-expression matching
- custom expressions
**Evaluation steps**:
1. Parse the condition expression
2. Read variable values from the context
3. Evaluate the condition
4. Return a boolean result
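The comparison step can be illustrated with a minimal evaluator. This is a sketch only; the real executor also handles logical operators, regex matching, and context-variable lookup:

```rust
/// Evaluate `lhs <op> rhs` for numeric operands, mirroring the
/// simple-comparison operators listed above.
pub fn eval_cmp(lhs: f64, op: &str, rhs: f64) -> Result<bool, String> {
    Ok(match op {
        "==" => lhs == rhs,
        "!=" => lhs != rhs,
        ">"  => lhs > rhs,
        "<"  => lhs < rhs,
        ">=" => lhs >= rhs,
        "<=" => lhs <= rhs,
        _ => return Err(format!("unsupported operator: {op}")),
    })
}
```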
### Script Executors
#### JavaScript Executor (script_js.rs)
**Purpose**: run JavaScript code
**Engine**: V8 (via rusty_v8)
#### Python Executor (script_python.rs)
**Purpose**: run Python code
**Engine**: the Python interpreter (via pyo3)
#### Rhai Executor (script_rhai.rs)
**Purpose**: run Rhai scripts
**Engine**: the Rhai scripting engine
**Shared features**:
- sandboxed execution environment
- context-variable injection
- result retrieval
- error handling and logging
### Variable Executor (variable.rs)
**Purpose**: variable operations and data transformation
**Operation types**:
- `SET`: set a variable
- `GET`: read a variable
- `TRANSFORM`: transform data
- `MERGE`: merge data
- `EXTRACT`: extract data
## Context Management (context.rs)
### StreamEvent
```rust
pub enum StreamEvent {
    NodeStart { node_id: String },                  // node started
    NodeComplete { node_id: String },               // node completed
    NodeError { node_id: String, error: String },   // node failed
    FlowComplete,                                   // flow completed
    FlowError { error: String },                    // flow failed
}
```
### Context Structure
```rust
pub struct ExecutionContext {
    pub variables: serde_json::Value,                      // variable store
    pub node_results: HashMap<String, serde_json::Value>,  // node results
    pub execution_log: Vec<LogEntry>,                      // execution log
    pub start_time: DateTime<Utc>,                         // start time
}
```
### Context Operations
- **Variable management**: set, read, and update variables
- **Result storage**: persist node execution results
- **Logging**: record the execution trace
- **State tracking**: follow execution state
## Data Mappers (mappers/)
### HTTP Mapper (http.rs)
**Purpose**: map HTTP request/response data
### Database Mapper (db.rs)
**Purpose**: map database query results
### Script Mapper (script.rs)
**Purpose**: map script execution results
### Variable Mapper (variable.rs)
**Purpose**: map variable data types
## Log Handling (log_handler.rs)
### Log Types
```rust
pub enum LogLevel {
    Debug,
    Info,
    Warn,
    Error,
}
pub struct LogEntry {
    pub level: LogLevel,
    pub message: String,
    pub timestamp: DateTime<Utc>,
    pub node_id: Option<String>,
    pub context: serde_json::Value,
}
```
### Logging Features
- **Execution logs**: record node execution progress
- **Error logs**: record execution failures
- **Performance logs**: record execution time and resource usage
- **Debug logs**: record debugging information
## Execution Modes
### 1. Synchronous
- blocking execution
- nodes run sequentially
- result returned immediately
- suited to simple flows
### 2. Asynchronous
- non-blocking execution
- flow runs in the background
- results delivered via callbacks
- suited to long-running flows
### 3. Streaming
- real-time event push
- the run can be visualized as it happens
- supports pause and resume
- suited to interactive flows
### 4. Batch
- processes many flows at once
- optimizes resource usage
- controls concurrency
- suited to batch workloads
## Error Handling
### Error Types
```rust
pub enum FlowError {
    ParseError(String),        // parse error
    ValidationError(String),   // validation error
    ExecutionError(String),    // execution error
    TimeoutError,              // timeout
    ResourceError(String),     // resource error
}
```
### Error-Handling Strategies
- **Retry**: automatically retry failed nodes
- **Degradation**: run fallback logic
- **Propagation**: bubble errors up to the caller
- **Logging**: record error details
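As an illustration of the retry strategy, here is a capped exponential-backoff schedule. This is an assumption for illustration only; the engine's actual retry policy is not specified in this document:

```rust
use std::time::Duration;

/// Compute a retry schedule of `attempts` delays:
/// base * 2^attempt, capped at `max_ms`.
pub fn backoff_delays(base_ms: u64, max_ms: u64, attempts: u32) -> Vec<Duration> {
    (0..attempts)
        .map(|i| Duration::from_millis(base_ms.saturating_mul(1u64 << i).min(max_ms)))
        .collect()
}
```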
## Performance Optimization
### Execution
- **Concurrency**: run independent nodes in parallel
- **Resource pooling**: reuse executor instances
- **Caching**: cache execution results
- **Lazy loading**: load executors on demand
### Memory
- **Context cleanup**: drop data that is no longer needed
- **Streaming**: process large data as streams
- **Object pooling**: reuse object instances
- **Garbage collection**: trigger collection proactively
### Network
- **Connection reuse**: keep HTTP connections alive
- **Request coalescing**: merge similar requests
- **Timeout control**: choose sensible timeouts
- **Retry policy**: retry intelligently
## Monitoring and Debugging
### Execution Monitoring
- **Execution state**: monitor runs in real time
- **Performance metrics**: collect execution performance data
- **Resource usage**: watch memory and CPU consumption
- **Error statistics**: aggregate failure counts
### Debugging Support
- **Breakpoints**: pause at a node
- **Single stepping**: execute one node at a time
- **Variable inspection**: inspect the execution context
- **Log output**: detailed execution logs
## Extension Mechanisms
### Custom Executors
```rust
#[derive(Default)]
pub struct CustomExecutor;
#[async_trait]
impl Executor for CustomExecutor {
    async fn execute(
        &self,
        node_id: &NodeId,
        node: &NodeDef,
        ctx: &mut serde_json::Value,
    ) -> anyhow::Result<()> {
        // custom execution logic
        Ok(())
    }
}
```
### Plugin System
- **Executor plugins**: add new node types
- **Middleware plugins**: extend the execution pipeline
- **Mapper plugins**: extend data mapping
- **Log plugins**: extend log handling
## Best Practices
### Flow Design
- **Modularity**: split complex flows into sub-flows
- **Error handling**: add error handling to critical nodes
- **Performance**: avoid passing data that is not needed
- **Maintainability**: add appropriate comments and documentation
### Node Configuration
- **Parameter validation**: validate node configuration
- **Defaults**: provide defaults for optional parameters
- **Type safety**: use strongly typed configuration
- **Version compatibility**: keep configuration backward compatible
### Execution Tuning
- **Concurrency control**: choose a sensible degree of parallelism
- **Resource limits**: set reasonable resource limits
- **Timeouts**: set timeouts for long-running nodes
- **Monitoring and alerting**: monitor key metrics
docs/FRONTEND_ARCHITECTURE.md (new file, 439 lines)
@ -0,0 +1,439 @@
# Frontend Architecture Documentation
## Overview
The frontend is built with React 18 + TypeScript using a modern component-based architecture, and integrates a powerful visual flow editor.
## Technology Stack
### Core Frameworks
- **React 18**: frontend framework with concurrent features
- **TypeScript**: typed superset of JavaScript
- **Vite**: modern build tool
### UI Libraries
- **Semi Design**: primary UI component library
- **Ant Design**: supplementary UI components
- **Styled Components**: CSS-in-JS styling
### Flow Editor
- **@flowgram.ai/free-layout-editor**: free-layout editor core
- **@flowgram.ai/form-materials**: form material components
- **@flowgram.ai/runtime-js**: flow runtime
- **@flowgram.ai/minimap-plugin**: minimap plugin
- **@flowgram.ai/panel-manager-plugin**: panel-manager plugin
### State Management and Routing
- **React Context**: state management
- **React Router v6**: client-side routing
- **Axios**: HTTP client
## Project Structure
```
frontend/src/
├── App.tsx          # application root component
├── main.tsx         # application entry point
├── vite-env.d.ts    # Vite type declarations
├── assets/          # static assets
├── components/      # shared components
├── flows/           # flow editor module
├── layouts/         # layout components
├── pages/           # page components
├── styles/          # global styles
└── utils/           # utility functions
```
## 核心模块
### 1. 应用入口 (main.tsx)
**职责**: 应用初始化和根组件渲染
**功能**:
- React 18 严格模式启用
- 路由配置
- 全局样式导入
- 错误边界设置
### 2. 应用根组件 (App.tsx)
**职责**: 应用路由配置和布局管理
**功能**:
- 路由定义和保护
- 认证状态管理
- 全局错误处理
- 主题配置
### 3. 布局系统 (layouts/)
#### MainLayout.tsx
**职责**: 主要页面布局
**功能**:
- 侧边栏导航
- 顶部导航栏
- 面包屑导航
- 用户信息显示
- 响应式布局
**布局结构**:
```tsx
<Layout>
<Sider>侧边栏</Sider>
<Layout>
<Header>顶部导航</Header>
<Content>页面内容</Content>
<Footer>页脚</Footer>
</Layout>
</Layout>
```
### 4. 页面组件 (pages/)
#### 管理页面
- `Dashboard.tsx`: 仪表板页面
- `Users.tsx`: 用户管理页面
- `Roles.tsx`: 角色管理页面
- `Menus.tsx`: 菜单管理页面
- `Departments.tsx`: 部门管理页面
- `Positions.tsx`: 职位管理页面
- `Permissions.tsx`: 权限管理页面
#### 流程相关页面
- `FlowList.tsx`: 流程列表页面
- `FlowRunLogs.tsx`: 流程运行日志页面
- `ScheduleJobs.tsx`: 定时任务页面
#### 系统页面
- `Login.tsx`: 登录页面
- `Logs.tsx`: 系统日志页面
**页面特性**:
- 统一的 CRUD 操作
- 表格分页和搜索
- 表单验证
- 权限控制
- 响应式设计
### 5. 通用组件 (components/)
#### PageHeader.tsx
**职责**: 页面头部组件
**功能**:
- 页面标题显示
- 面包屑导航
- 操作按钮区域
- 统一样式
### 6. 工具函数 (utils/)
#### axios.ts
**职责**: HTTP 客户端配置
**功能**:
- 请求/响应拦截器
- 自动 Token 添加
- 错误统一处理
- 请求重试机制
#### token.ts
**职责**: 令牌管理
**功能**:
- Token 存储和获取
- Token 过期检查
- 自动刷新机制
- 登出清理
#### permission.tsx
**职责**: 权限控制
**功能**:
- 权限检查组件
- 路由权限保护
- 按钮级权限控制
- 角色权限验证
#### sse.ts
**职责**: 服务端推送事件
**功能**:
- SSE 连接管理
- 事件监听
- 自动重连
- 错误处理
#### datetime.ts
**职责**: 日期时间处理
**功能**:
- 日期格式化
- 时区转换
- 相对时间显示
- 日期计算
#### config.ts
**职责**: 应用配置
**功能**:
- 环境变量管理
- API 端点配置
- 应用常量定义
- 功能开关
## 流程编辑器模块
**位置**: `src/flows/`
### 核心组件
#### 1. 编辑器入口 (editor.tsx)
**职责**: 流程编辑器主组件
**功能**:
- 编辑器初始化
- 插件注册
- 事件处理
- 数据同步
#### 2. 应用容器 (app.tsx)
**职责**: 编辑器应用容器
**功能**:
- 编辑器配置
- 工具栏管理
- 侧边栏控制
- 快捷键支持
#### 3. 初始数据 (initial-data.ts)
**职责**: 编辑器初始化数据
**功能**:
- 默认节点配置
- 画布初始状态
- 工具栏配置
- 插件配置
### 节点系统 (nodes/)
#### 节点类型
- `start/`: 开始节点
- `end/`: 结束节点
- `condition/`: 条件节点
- `http/`: HTTP 请求节点
- `db/`: 数据库操作节点
- `code/`: 代码执行节点
- `variable/`: 变量操作节点
- `loop/`: 循环节点
- `comment/`: 注释节点
- `group/`: 分组节点
#### 节点特性
- 可视化配置界面
- 参数验证
- 实时预览
- 拖拽支持
- 连接点管理
### 组件系统 (components/)
#### 核心组件
- `base-node/`: 基础节点组件
- `node-panel/`: 节点配置面板
- `sidebar/`: 侧边栏组件
- `tools/`: 工具栏组件
- `testrun/`: 测试运行组件
#### 交互组件
- `add-node/`: 添加节点组件
- `node-menu/`: 节点菜单
- `line-add-button/`: 连线添加按钮
- `selector-box-popover/`: 选择框弹窗
### 表单系统 (form-components/)
#### 表单组件
- `form-header/`: 表单头部
- `form-content/`: 表单内容
- `form-inputs/`: 表单输入组件
- `form-item/`: 表单项组件
- `feedback.tsx`: 反馈组件
### 插件系统 (plugins/)
#### 插件列表
- `context-menu-plugin/`: 右键菜单插件
- `runtime-plugin/`: 运行时插件
- `variable-panel-plugin/`: 变量面板插件
### 快捷键系统 (shortcuts/)
#### 快捷键功能
- `copy/`: 复制功能
- `paste/`: 粘贴功能
- `delete/`: 删除功能
- `select-all/`: 全选功能
- `zoom-in/`: 放大功能
- `zoom-out/`: 缩小功能
- `collapse/`: 折叠功能
- `expand/`: 展开功能
### 上下文管理 (context/)
#### 上下文类型
- `node-render-context.ts`: 节点渲染上下文
- `sidebar-context.ts`: 侧边栏上下文
### Hooks 系统 (hooks/)
#### 自定义 Hooks
- `use-editor-props.tsx`: 编辑器属性 Hook
- `use-is-sidebar.ts`: 侧边栏状态 Hook
- `use-node-render-context.ts`: 节点渲染上下文 Hook
- `use-port-click.ts`: 端口点击 Hook
### 工具函数 (utils/)
#### 工具函数
- `yaml.ts`: YAML 处理工具
- `on-drag-line-end.ts`: 拖拽连线结束处理
- `toggle-loop-expanded.ts`: 循环节点展开切换
## 状态管理
### Context 设计
#### 全局状态
- 用户认证状态
- 权限信息
- 主题配置
- 语言设置
#### 页面状态
- 表格数据
- 分页信息
- 搜索条件
- 选中项
#### 编辑器状态
- 画布数据
- 选中节点
- 编辑模式
- 工具栏状态
### 状态更新模式
- 不可变更新
- 批量更新
- 异步状态处理
- 错误状态管理
## 路由设计
### 路由结构
```
/
├── /login # 登录页面
├── /dashboard # 仪表板
├── /users # 用户管理
├── /roles # 角色管理
├── /menus # 菜单管理
├── /departments # 部门管理
├── /positions # 职位管理
├── /permissions # 权限管理
├── /flows # 流程列表
├── /flows/:id/edit # 流程编辑
├── /flows/logs # 流程日志
├── /schedule-jobs # 定时任务
└── /logs # 系统日志
```
### 路由保护
- 认证检查
- 权限验证
- 角色控制
- 重定向处理
## 样式系统
### CSS 架构
- 全局样式 (`global.css`)
- 组件样式 (CSS Modules)
- 主题变量
- 响应式断点
### 设计系统
- 颜色规范
- 字体规范
- 间距规范
- 组件规范
## 性能优化
### 代码分割
- 路由级别分割
- 组件懒加载
- 动态导入
- Bundle 分析
### 渲染优化
- React.memo 使用
- useMemo 缓存
- useCallback 优化
- 虚拟滚动
### 资源优化
- 图片懒加载
- 资源压缩
- CDN 加速
- 缓存策略
## 测试策略
### 测试类型
- 单元测试
- 集成测试
- E2E 测试
- 视觉回归测试
### 测试工具
- Jest: 单元测试框架
- React Testing Library: 组件测试
- Cypress: E2E 测试
- Storybook: 组件文档
## 构建和部署
### 构建配置
- Vite 配置优化
- 环境变量管理
- 代码分割策略
- 资源优化
### 部署策略
- 静态资源部署
- CDN 配置
- 缓存策略
- 版本管理
## 开发规范
### 代码规范
- ESLint 配置
- Prettier 格式化
- TypeScript 严格模式
- 提交规范
### 组件规范
- 组件命名
- Props 定义
- 事件处理
- 样式组织
### 文件组织
- 目录结构
- 文件命名
- 导入导出
- 类型定义

New file (151 lines)
# ID Generator Time-Ordering Analysis Report
## Overview
This report analyzes the Snowflake-based ID generator in the `udmin` project and verifies whether it guarantees that **every newly generated ID is strictly greater than all previously generated IDs**.
## ID Generator Architecture
### 1. Base Implementation
- **Algorithm**: based on the Snowflake distributed-ID algorithm
- **Library**: the `rs-snowflake` crate
- **Thread safety**: guaranteed via `Mutex<SnowflakeIdGenerator>`
- **Global singleton**: lazily initialized with `once_cell::sync::Lazy`
### 2. ID Layout
```
|-- 16 bits --|-- 8 bits --|------ 39 bits ------|
|   main_id   |   sub_id   |   snowflake_bits    |
| main biz    | sub biz    | timestamp+sequence  |
| type        | type       |                     |
```
- **Total length**: 63 bits (the top bit stays 0, keeping the value positive)
- **main_id**: 16-bit primary business-type tag
- **sub_id**: 8-bit business sub-type tag
- **snowflake_bits**: 39 bits containing the timestamp and sequence number
### 3. Business ID Types
| Business type | main_id | sub_id | Purpose |
|---------------|---------|--------|---------|
| Generic ID | 1 | 1 | general use: flows, tasks, etc. |
| Flow run log | 2 | 1 | flow execution log records |
| Request log | 3 | 1 | HTTP request log records |
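The layout above composes an ID by shifting the business tags over the Snowflake part. A sketch of that composition (illustrative, not the project's actual code):

```rust
/// Compose a 63-bit business ID from the documented layout:
/// 16-bit main_id | 8-bit sub_id | 39-bit snowflake part.
pub fn compose_id(main_id: u16, sub_id: u8, snowflake_bits: i64) -> i64 {
    ((main_id as i64) << 47)          // bits 47..62
        | ((sub_id as i64) << 39)     // bits 39..46
        | (snowflake_bits & ((1 << 39) - 1)) // low 39 bits
}
```

Because the business tags occupy the high bits, IDs with the same (main_id, sub_id) prefix are ordered by their Snowflake part, while different business types land in disjoint numeric ranges.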
## Test Results
### Test 1: Monotonicity of back-to-back IDs ✅
```
ID 1: 141817072979969
ID 2: 141817072979970
ID 3: 141817072979971
...
ID 10: 141817072979978
```
**Conclusion**: consecutively generated IDs increase strictly, by exactly 1 each time.
### Test 2: Monotonicity across time intervals ✅
```
Interval ID 1: 141817072979979
Interval ID 2: 141817509187584 (+436,207,605)
Interval ID 3: 141817949589504 (+440,401,920)
Interval ID 4: 141818389991424 (+440,401,920)
Interval ID 5: 141818822004736 (+432,013,312)
```
**Conclusion**: IDs generated 100 ms apart grow by large steps, reflecting the timestamp component.
### Test 3: Monotonicity across business types ✅
```
Flow ID 1: 141819258212352 (main_id=1, sub_id=1)
Flow ID 2: 141819262406656 (main_id=1, sub_id=1)
Log ID 1: 282556754956288 (main_id=2, sub_id=1)
Log ID 2: 282556763344896 (main_id=2, sub_id=1)
```
**Conclusions**:
- IDs of the same business type increase strictly
- IDs of different business types differ greatly, since their high bits differ
### Test 4: Uniqueness under concurrency ✅
- **Threads**: 5 concurrent threads
- **IDs per thread**: 10
- **Total IDs**: 50
- **Uniqueness**: 100% (no duplicates)
**Conclusion**: all IDs were unique under concurrency, confirming that the thread-safety mechanism works.
### Test 5: Timestamp-component check ✅
```
ID1: 141819325321216, timestamp part: 532081152000
ID2: 141819379847168, timestamp part: 532135677952
```
**Conclusion**: the timestamp part of later IDs exceeds that of earlier IDs, reflecting time monotonicity.
## How Time Ordering Is Guaranteed
### 1. Snowflake Algorithm Guarantees
- **Timestamp**: the millisecond timestamp occupies the dominant bits, so IDs generated at different times increase
- **Sequence number**: within one millisecond the sequence number increments, keeping same-millisecond IDs ordered
- **Machine ID**: IDs from different machines are distinguished by machine ID, avoiding collisions
### 2. Business-Layer Guarantees
- **Business prefix**: the high-order business tag keeps IDs of different types in disjoint ordered ranges
- **Timestamp reservation**: 39 bits are reserved for the Snowflake part, preserving time precision
- **Global lock**: a Mutex makes generation atomic
### 3. Sketch of the Argument
Let two IDs be generated at times t1 < t2.
1. **Different milliseconds**: timestamp(t2) > timestamp(t1) → ID2 > ID1
2. **Same millisecond**: sequence(t2) > sequence(t1) → ID2 > ID1
3. **Same business prefix**: the low 39 Snowflake bits determine the ordering
4. **Different business prefixes**: the high-order business tag determines the ordering
## Performance Characteristics
### 1. Generation Rate
- **Theoretical throughput**: at most 4096 IDs per millisecond (12-bit sequence number)
- **Measured**: 50 IDs generated concurrently without delay
- **Lock contention**: atomic generation under the Mutex performs well
### 2. Storage Efficiency
- **Bit length**: 63 bits, fits an i64
- **String length**: about 19 decimal digits
- **Index friendly**: numeric type, efficient for database indexes
## Conclusion
**Verified**: the ID generator fully satisfies the requirement that later IDs are always greater than earlier ones.
### Core guarantee mechanisms:
1. **Time monotonicity**: Snowflake's timestamp mechanism
2. **Sequence monotonicity**: the per-millisecond sequence number
3. **Business isolation**: business types separated in the high bits
4. **Concurrency safety**: atomicity via Mutex
5. **Distribution support**: machine and node IDs prevent multi-instance collisions
### Suitable uses:
- ✅ database primary keys (unique and increasing)
- ✅ distributed-system IDs (multi-node deployments)
- ✅ trace IDs for logs (time-ordered, easy to query)
- ✅ business serial numbers (type-tagged, globally unique)
### Caveats:
- relies on the system clock; clock rollback can break monotonicity
- per-machine throughput is capped at 4096/ms; beyond that, optimization is needed
- business-type allocation must be planned ahead to avoid conflicts
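A common mitigation for the clock-rollback caveat is to clamp the observed timestamp so it never decreases. This is a hypothetical helper sketching the idea, not the project's implementation:

```rust
/// Keeps the last observed millisecond timestamp and never lets the
/// returned value go backwards, even if the OS clock regresses.
pub struct MonotonicMs {
    last: u64,
}

impl MonotonicMs {
    pub fn new() -> Self {
        Self { last: 0 }
    }
    /// Return `now_ms` clamped so the sequence of returned values is
    /// non-decreasing.
    pub fn observe(&mut self, now_ms: u64) -> u64 {
        if now_ms > self.last {
            self.last = now_ms;
        }
        self.last
    }
}
```

During a rollback window the generator would then keep using the last timestamp and rely on the sequence number, trading throughput for monotonicity.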

docs/MIDDLEWARES.md (new file, 2227 lines; diff suppressed because it is too large)
docs/MODELS.md (new file, 1650 lines; diff suppressed because it is too large)
docs/PROJECT_OVERVIEW.md (new file, 137 lines)
# UdminAI Project Overview
## Introduction
UdminAI is a modern flow-management and automation platform built on Rust + React. It provides visual flow editing, scheduled-job management, and user/permission management.
## Architecture
### Backend Stack
- **Framework**: Axum (async web framework)
- **Database**: SeaORM (MySQL/PostgreSQL/SQLite)
- **Cache**: Redis
- **Auth**: JWT + Argon2 password hashing
- **Scheduling**: tokio-cron-scheduler
- **Flow engine**: in-house flow execution engine
- **Realtime**: WebSocket + SSE
### Frontend Stack
- **Framework**: React 18 + TypeScript
- **UI libraries**: Semi Design + Ant Design
- **Flow editor**: the @flowgram.ai component family
- **State management**: React Context
- **Routing**: React Router v6
- **HTTP client**: Axios
## Project Structure
```
udmin_ai/
├── backend/              # Rust backend service
│   ├── src/
│   │   ├── flow/         # flow engine core
│   │   ├── models/       # data models
│   │   ├── services/     # business-logic layer
│   │   ├── routes/       # API routes
│   │   ├── middlewares/  # middleware
│   │   └── utils/        # utility functions
│   └── migration/        # database migrations
├── frontend/             # React frontend app
│   └── src/
│       ├── flows/        # flow editor
│       ├── pages/        # page components
│       ├── components/   # shared components
│       └── utils/        # utility functions
├── docs/                 # project documentation
├── scripts/              # deployment scripts
└── README.md
```
## Core Feature Modules
### 1. Users and Permissions
- user management (Users)
- role management (Roles)
- menu permissions (Menus)
- department management (Departments)
- position management (Positions)
### 2. Flow Management
- visual flow editor
- flow execution engine
- flow run logs
- many node types (HTTP, database, script, condition, and more)
### 3. Scheduled Jobs
- Cron expression support
- job scheduling management
- execution-status monitoring
### 4. System Monitoring
- request logging
- system runtime status
- realtime communication support
## Deployment
### Port Assignments
- **HTTP API**: 9898 (configurable)
- **WebSocket**: 8877 (configurable)
- **SSE**: 8866 (configurable)
- **Frontend dev server**: 8888
### Environment Configuration
- development: `.env`
- production: `.env.prod`
- staging: `.env.staging`
## Development Guide
### Backend
```bash
cd backend
cargo run              # development mode
cargo build --release  # production build
```
### Frontend
```bash
cd frontend
npm install
npm run dev    # dev server
npm run build  # production build
```
### Database Migrations
```bash
cd backend/migration
cargo run
```
## API Documentation
Swagger UI is built in; once the backend is running, visit:
- Swagger UI: `http://localhost:9898/swagger-ui/`
- OpenAPI JSON: `http://localhost:9898/api-docs/openapi.json`
## Security Features
- JWT token authentication
- Argon2 password hashing
- CORS protection
- request logging
- permission-checking middleware
## Extensibility
- modular architecture, easy to extend
- pluggable flow nodes with custom executors
- microservice-friendly design
- horizontal scaling support
## Monitoring and Logging
- structured logging (tracing)
- request tracing
- performance monitoring
- error handling and reporting

docs/RESPONSE.md (new file, 1161 lines; diff suppressed because it is too large)
docs/ROUTES.md (new file, 1201 lines; diff suppressed because it is too large)
docs/SERVICES.md (new file, 853 lines)
# Service Layer Documentation
## Overview
The service layer is the business-logic core of UdminAI. It handles user management, access control, flow management, scheduled jobs, system monitoring, and more.
## Architecture
### Module Structure
```
services/
├── mod.rs                     # service module exports
├── user_service.rs            # user service
├── role_service.rs            # role service
├── permission_service.rs      # permission service
├── flow_service.rs            # flow service
├── schedule_job_service.rs    # scheduled-job service
├── system_service.rs          # system service
├── log_service.rs             # log service
└── notification_service.rs    # notification service
```
### Design Principles
- **Single responsibility**: each service owns one business domain
- **Dependency injection**: dependencies such as the database connection are passed in as parameters
- **Error handling**: uniform error handling and return format
- **Async throughout**: every service method is async
- **Transactions**: database transactions are supported
## 用户服务 (user_service.rs)
### 核心功能
#### 1. 用户认证
```rust
/// 用户登录验证
pub async fn authenticate(
db: &DatabaseConnection,
username: &str,
password: &str,
) -> Result<UserDoc, AppError>
```
**功能**:
- 用户名/密码验证
- 密码哈希比较
- 登录状态更新
- 登录日志记录
#### 2. 用户管理
```rust
/// 创建用户
pub async fn create_user(
db: &DatabaseConnection,
req: CreateUserReq,
) -> Result<UserDoc, AppError>
/// 更新用户信息
pub async fn update_user(
db: &DatabaseConnection,
id: &str,
req: UpdateUserReq,
) -> Result<UserDoc, AppError>
/// 删除用户
pub async fn delete_user(
db: &DatabaseConnection,
id: &str,
) -> Result<(), AppError>
```
#### 3. 用户查询
```rust
/// 分页查询用户列表
pub async fn list_users(
db: &DatabaseConnection,
page: u64,
page_size: u64,
filters: Option<UserFilters>,
) -> Result<PageResp<UserDoc>, AppError>
/// 根据ID获取用户
pub async fn get_user_by_id(
db: &DatabaseConnection,
id: &str,
) -> Result<Option<UserDoc>, AppError>
```
### 数据传输对象
#### UserDoc - 用户文档
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct UserDoc {
pub id: String,
pub username: String,
pub email: Option<String>,
pub display_name: Option<String>,
pub avatar: Option<String>,
pub status: UserStatus,
pub roles: Vec<String>,
pub created_at: DateTime<FixedOffset>,
pub updated_at: DateTime<FixedOffset>,
}
```
#### CreateUserReq - 创建用户请求
```rust
#[derive(Debug, Deserialize, Validate)]
pub struct CreateUserReq {
#[validate(length(min = 3, max = 50))]
pub username: String,
#[validate(length(min = 6))]
pub password: String,
#[validate(email)]
pub email: Option<String>,
pub display_name: Option<String>,
pub roles: Vec<String>,
}
```
### 安全特性
- **密码加密**: 使用 bcrypt 进行密码哈希
- **输入验证**: 使用 validator 进行参数验证
- **权限检查**: 集成权限服务进行访问控制
- **审计日志**: 记录用户操作日志
## 角色服务 (role_service.rs)
### 核心功能
#### 1. 角色管理
```rust
/// 创建角色
pub async fn create_role(
db: &DatabaseConnection,
req: CreateRoleReq,
) -> Result<RoleDoc, AppError>
/// 更新角色
pub async fn update_role(
db: &DatabaseConnection,
id: &str,
req: UpdateRoleReq,
) -> Result<RoleDoc, AppError>
/// 删除角色
pub async fn delete_role(
db: &DatabaseConnection,
id: &str,
) -> Result<(), AppError>
```
#### 2. 权限分配
```rust
/// 为角色分配权限
pub async fn assign_permissions(
db: &DatabaseConnection,
role_id: &str,
permission_ids: Vec<String>,
) -> Result<(), AppError>
/// 移除角色权限
pub async fn remove_permissions(
db: &DatabaseConnection,
role_id: &str,
permission_ids: Vec<String>,
) -> Result<(), AppError>
```
#### 3. 用户角色管理
```rust
/// 为用户分配角色
pub async fn assign_user_roles(
db: &DatabaseConnection,
user_id: &str,
role_ids: Vec<String>,
) -> Result<(), AppError>
/// 获取用户角色
pub async fn get_user_roles(
db: &DatabaseConnection,
user_id: &str,
) -> Result<Vec<RoleDoc>, AppError>
```
### 数据传输对象
#### RoleDoc - 角色文档
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct RoleDoc {
pub id: String,
pub name: String,
pub description: Option<String>,
pub permissions: Vec<String>,
pub is_system: bool,
pub created_at: DateTime<FixedOffset>,
pub updated_at: DateTime<FixedOffset>,
}
```
## 权限服务 (permission_service.rs)
### 核心功能
#### 1. 权限检查
```rust
/// 检查用户权限
pub async fn check_user_permission(
db: &DatabaseConnection,
user_id: &str,
resource: &str,
action: &str,
) -> Result<bool, AppError>
/// 检查角色权限
pub async fn check_role_permission(
db: &DatabaseConnection,
role_id: &str,
resource: &str,
action: &str,
) -> Result<bool, AppError>
```
#### 2. 权限管理
```rust
/// 创建权限
pub async fn create_permission(
db: &DatabaseConnection,
req: CreatePermissionReq,
) -> Result<PermissionDoc, AppError>
/// 获取权限树
pub async fn get_permission_tree(
db: &DatabaseConnection,
) -> Result<Vec<PermissionTreeNode>, AppError>
```
### Permission Model
#### Resource–Action Model
- **Resource**: an entity in the system (user, role, flow, job)
- **Action**: an operation on a resource (create, read, update, delete)
- **Permission**: a resource–action pair
#### Permission Inheritance
- roles inherit permissions
- users inherit their roles' permissions
- effective permissions are computed from the combination
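The resource–action model boils down to set membership over `resource:action` pairs. A minimal sketch, using a hypothetical helper rather than the service's actual API:

```rust
use std::collections::HashSet;

/// Check whether a permission set (the union of the user's role
/// permissions) grants the given resource/action pair.
pub fn has_permission(granted: &HashSet<String>, resource: &str, action: &str) -> bool {
    granted.contains(&format!("{resource}:{action}"))
}
```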
## 流程服务 (flow_service.rs)
### 核心功能
#### 1. 流程管理
```rust
/// 创建流程
pub async fn create_flow(
db: &DatabaseConnection,
req: CreateFlowReq,
) -> Result<FlowDoc, AppError>
/// 更新流程
pub async fn update_flow(
db: &DatabaseConnection,
id: &str,
req: UpdateFlowReq,
) -> Result<FlowDoc, AppError>
/// 发布流程
pub async fn publish_flow(
db: &DatabaseConnection,
id: &str,
) -> Result<FlowDoc, AppError>
```
#### 2. 流程执行
```rust
/// 执行流程
pub async fn execute_flow(
db: &DatabaseConnection,
flow_id: &str,
input: serde_json::Value,
options: ExecuteOptions,
) -> Result<ExecutionResult, AppError>
/// 获取执行状态
pub async fn get_execution_status(
db: &DatabaseConnection,
execution_id: &str,
) -> Result<ExecutionStatus, AppError>
```
#### 3. 流程版本管理
```rust
/// 创建流程版本
pub async fn create_flow_version(
db: &DatabaseConnection,
flow_id: &str,
version_data: FlowVersionData,
) -> Result<FlowVersionDoc, AppError>
/// 获取流程版本列表
pub async fn list_flow_versions(
db: &DatabaseConnection,
flow_id: &str,
) -> Result<Vec<FlowVersionDoc>, AppError>
```
### 数据传输对象
#### FlowDoc - 流程文档
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct FlowDoc {
pub id: String,
pub name: String,
pub description: Option<String>,
pub category: String,
pub status: FlowStatus,
pub version: String,
pub design: serde_json::Value,
pub created_by: String,
pub created_at: DateTime<FixedOffset>,
pub updated_at: DateTime<FixedOffset>,
}
```
#### ExecutionResult - 执行结果
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct ExecutionResult {
pub execution_id: String,
pub status: ExecutionStatus,
pub result: Option<serde_json::Value>,
pub error: Option<String>,
pub start_time: DateTime<FixedOffset>,
pub end_time: Option<DateTime<FixedOffset>>,
pub duration_ms: Option<u64>,
}
```
## 定时任务服务 (schedule_job_service.rs)
### 核心功能
#### 1. 任务管理
```rust
/// 创建定时任务
pub async fn create_schedule_job(
db: &DatabaseConnection,
req: CreateScheduleJobReq,
) -> Result<ScheduleJobDoc, AppError>
/// 更新任务
pub async fn update_schedule_job(
db: &DatabaseConnection,
id: &str,
req: UpdateScheduleJobReq,
) -> Result<ScheduleJobDoc, AppError>
/// 启用/禁用任务
pub async fn toggle_job_status(
db: &DatabaseConnection,
id: &str,
enabled: bool,
) -> Result<ScheduleJobDoc, AppError>
```
#### 2. 任务调度
```rust
/// 注册任务到调度器
pub async fn register_job_to_scheduler(
scheduler: &JobScheduler,
job: &ScheduleJobDoc,
) -> Result<(), AppError>
/// 从调度器移除任务
pub async fn unregister_job_from_scheduler(
scheduler: &JobScheduler,
job_id: &str,
) -> Result<(), AppError>
```
#### 3. 执行历史
```rust
/// 记录任务执行
pub async fn record_job_execution(
db: &DatabaseConnection,
execution: JobExecutionRecord,
) -> Result<(), AppError>
/// 获取执行历史
pub async fn get_job_execution_history(
db: &DatabaseConnection,
job_id: &str,
page: u64,
page_size: u64,
) -> Result<PageResp<JobExecutionRecord>, AppError>
```
### Scheduling Features
- **Cron expressions**: standard Cron expressions are supported
- **Time zones**: jobs can be scheduled in different time zones
- **Concurrency control**: a job never runs twice at the same time
- **Retry on failure**: failed runs can be retried
- **Execution timeout**: per-job execution timeouts
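The concurrency-control guarantee can be sketched with a per-job flag that only one scheduler tick can acquire at a time. This is illustrative only; the scheduler's real mechanism is not shown in this document:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// A per-job guard: only one caller at a time can hold the job.
pub struct RunGuard {
    busy: AtomicBool,
}

impl RunGuard {
    pub fn new() -> Self {
        Self { busy: AtomicBool::new(false) }
    }
    /// Returns true if this caller acquired the job; a tick that
    /// fails to acquire simply skips the run.
    pub fn try_acquire(&self) -> bool {
        self.busy
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }
    pub fn release(&self) {
        self.busy.store(false, Ordering::Release);
    }
}
```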
## 系统服务 (system_service.rs)
### 核心功能
#### 1. 系统信息
```rust
/// 获取系统信息
pub async fn get_system_info() -> Result<SystemInfo, AppError>
/// 获取系统状态
pub async fn get_system_status(
db: &DatabaseConnection,
redis: &RedisConnection,
) -> Result<SystemStatus, AppError>
```
#### 2. 健康检查
```rust
/// 数据库健康检查
pub async fn check_database_health(
db: &DatabaseConnection,
) -> Result<HealthStatus, AppError>
/// Redis 健康检查
pub async fn check_redis_health(
redis: &RedisConnection,
) -> Result<HealthStatus, AppError>
```
#### 3. 系统配置
```rust
/// 获取系统配置
pub async fn get_system_config(
db: &DatabaseConnection,
) -> Result<SystemConfig, AppError>
/// 更新系统配置
pub async fn update_system_config(
db: &DatabaseConnection,
config: SystemConfig,
) -> Result<(), AppError>
```
### 监控指标
#### SystemStatus - 系统状态
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct SystemStatus {
pub uptime: u64,
pub memory_usage: MemoryUsage,
pub cpu_usage: f64,
pub disk_usage: DiskUsage,
pub database_status: HealthStatus,
pub redis_status: HealthStatus,
pub active_connections: u32,
pub request_count: u64,
pub error_count: u64,
}
```
## Log Service (log_service.rs)
### Core Functions
#### 1. Log Recording
```rust
/// Record an operation log
pub async fn log_operation(
    db: &DatabaseConnection,
    log: OperationLog,
) -> Result<(), AppError>

/// Record a system event
pub async fn log_system_event(
    db: &DatabaseConnection,
    event: SystemEvent,
) -> Result<(), AppError>
```
#### 2. Log Queries
```rust
/// Query operation logs
pub async fn query_operation_logs(
    db: &DatabaseConnection,
    filters: LogFilters,
    page: u64,
    page_size: u64,
) -> Result<PageResp<OperationLogDoc>, AppError>

/// Query system logs
pub async fn query_system_logs(
    db: &DatabaseConnection,
    filters: LogFilters,
    page: u64,
    page_size: u64,
) -> Result<PageResp<SystemLogDoc>, AppError>
```
#### 3. Log Analysis
```rust
/// Get log statistics
pub async fn get_log_statistics(
    db: &DatabaseConnection,
    time_range: TimeRange,
) -> Result<LogStatistics, AppError>

/// Get the error-log trend
pub async fn get_error_log_trend(
    db: &DatabaseConnection,
    time_range: TimeRange,
) -> Result<Vec<ErrorTrendPoint>, AppError>
```
### Log Types
#### OperationLog - Operation Log
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct OperationLog {
    pub user_id: String,
    pub operation: String,
    pub resource: String,
    pub resource_id: Option<String>,
    pub details: serde_json::Value,
    pub ip_address: Option<String>,
    pub user_agent: Option<String>,
    pub timestamp: DateTime<FixedOffset>,
}
```
#### SystemEvent - System Event
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct SystemEvent {
    pub event_type: String,
    pub level: LogLevel,
    pub message: String,
    pub details: serde_json::Value,
    pub source: String,
    pub timestamp: DateTime<FixedOffset>,
}
```
## Notification Service (notification_service.rs)
### Core Functions
#### 1. Sending Notifications
```rust
/// Send an email notification
pub async fn send_email_notification(
    config: &EmailConfig,
    notification: EmailNotification,
) -> Result<(), AppError>

/// Send an SMS notification
pub async fn send_sms_notification(
    config: &SmsConfig,
    notification: SmsNotification,
) -> Result<(), AppError>

/// Send a system (in-app) notification
pub async fn send_system_notification(
    db: &DatabaseConnection,
    notification: SystemNotification,
) -> Result<(), AppError>
```
#### 2. Notification Templates
```rust
/// Create a notification template
pub async fn create_notification_template(
    db: &DatabaseConnection,
    template: NotificationTemplate,
) -> Result<NotificationTemplateDoc, AppError>

/// Render a notification template
pub async fn render_notification_template(
    template: &NotificationTemplate,
    variables: &serde_json::Value,
) -> Result<String, AppError>
```
#### 3. Notification History
```rust
/// Record notification history
pub async fn record_notification_history(
    db: &DatabaseConnection,
    history: NotificationHistory,
) -> Result<(), AppError>

/// Query notification history
pub async fn query_notification_history(
    db: &DatabaseConnection,
    filters: NotificationFilters,
    page: u64,
    page_size: u64,
) -> Result<PageResp<NotificationHistoryDoc>, AppError>
```
### Notification Channels
- **Email**: sent via SMTP
- **SMS**: sent via an SMS gateway
- **System**: in-app messages
- **Webhook**: HTTP callback notifications
- **Push**: mobile push notifications
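The core of `render_notification_template` is placeholder substitution. A minimal TypeScript sketch of that idea, assuming `{{name}}`-style placeholders; leaving unknown placeholders intact so missing variables stay visible is a design choice of this sketch, not necessarily the service's:

```typescript
// Substitute {{name}} placeholders from a variables object.
// Unknown placeholders are returned unchanged instead of being dropped.
function renderTemplate(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

console.log(renderTemplate("Job {{job}} failed {{count}} times", { job: "sync", count: 3 }));
// → "Job sync failed 3 times"
```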
## Service Integration
### Dependency Injection
```rust
// Example of service dependency injection
pub struct ServiceContainer {
    pub db: DatabaseConnection,
    pub redis: RedisConnection,
    pub scheduler: JobScheduler,
    pub email_config: EmailConfig,
    pub sms_config: SmsConfig,
}

impl ServiceContainer {
    pub fn user_service(&self) -> UserService {
        UserService::new(&self.db)
    }

    pub fn flow_service(&self) -> FlowService {
        FlowService::new(&self.db, &self.redis)
    }
}
```
### Transaction Management
```rust
/// Example of running several steps in one transaction
pub async fn create_user_with_roles(
    db: &DatabaseConnection,
    user_req: CreateUserReq,
    role_ids: Vec<String>,
) -> Result<UserDoc, AppError> {
    let txn = db.begin().await?;
    // Create the user
    let user = user_service::create_user(&txn, user_req).await?;
    // Assign roles
    role_service::assign_user_roles(&txn, &user.id, role_ids).await?;
    txn.commit().await?;
    Ok(user)
}
```
### Caching Strategy
```rust
/// Cache-aside example
pub async fn get_user_with_cache(
    db: &DatabaseConnection,
    redis: &RedisConnection,
    user_id: &str,
) -> Result<Option<UserDoc>, AppError> {
    // Try the cache first
    if let Some(cached_user) = redis.get(&format!("user:{}", user_id)).await? {
        return Ok(Some(serde_json::from_str(&cached_user)?));
    }
    // Fall back to the database
    if let Some(user) = user_service::get_user_by_id(db, user_id).await? {
        // Populate the cache
        redis.setex(
            &format!("user:{}", user_id),
            3600, // expires in 1 hour
            &serde_json::to_string(&user)?,
        ).await?;
        Ok(Some(user))
    } else {
        Ok(None)
    }
}
```
## Error Handling
### Unified Error Type
```rust
#[derive(Debug, thiserror::Error)]
pub enum ServiceError {
    #[error("database error: {0}")]
    DatabaseError(#[from] sea_orm::DbErr),
    #[error("validation error: {0}")]
    ValidationError(String),
    #[error("permission denied")]
    PermissionDenied,
    #[error("resource not found: {0}")]
    ResourceNotFound(String),
    #[error("business logic error: {0}")]
    BusinessLogicError(String),
}
```
### Error-Handling Pattern
```rust
/// Map service errors to API errors in one place
pub async fn handle_service_result<T>(
    result: Result<T, ServiceError>,
) -> Result<T, AppError> {
    match result {
        Ok(value) => Ok(value),
        Err(ServiceError::ValidationError(msg)) => {
            Err(AppError::BadRequest(msg))
        },
        Err(ServiceError::PermissionDenied) => {
            Err(AppError::Forbidden("permission denied".to_string()))
        },
        Err(ServiceError::ResourceNotFound(resource)) => {
            Err(AppError::NotFound(format!("{} not found", resource)))
        },
        Err(e) => Err(AppError::InternalServerError(e.to_string())),
    }
}
```
## Performance Optimization
### Database Optimization
- **Connection pooling**: use a database connection pool
- **Query optimization**: tune SQL queries
- **Indexes**: use database indexes appropriately
- **Batch operations**: use bulk inserts/updates
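The batch-operation point can be illustrated with a chunking helper: rows are split into fixed-size groups so each group becomes one bulk insert/update statement. A TypeScript sketch:

```typescript
// Split a list of rows into fixed-size chunks for bulk statements.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("chunk size must be positive");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // → [[1, 2], [3, 4], [5]]
```

Each chunk would then be passed to one bulk insert (e.g. SeaORM's `insert_many`), trading statement count for statement size.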
### Cache Optimization
- **Hot data**: cache frequently accessed data
- **Query results**: cache expensive query results
- **Warm-up**: preload caches at startup
- **Invalidation**: refresh stale cache entries promptly
### Concurrency Optimization
- **Async processing**: use an asynchronous programming model
- **Concurrency limits**: cap the number of concurrent tasks
- **Lock tuning**: minimize lock usage and hold times
- **Lock-free design**: use lock-free data structures where possible
## Testing Strategy
### Unit Tests
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_create_user() {
        let db = setup_test_db().await;
        let req = CreateUserReq {
            username: "test_user".to_string(),
            password: "password123".to_string(),
            email: Some("test@example.com".to_string()),
            display_name: None,
            roles: vec![],
        };
        let result = create_user(&db, req).await;
        assert!(result.is_ok());
        let user = result.unwrap();
        assert_eq!(user.username, "test_user");
    }
}
```
### Integration Tests
```rust
#[tokio::test]
async fn test_user_role_integration() {
    let container = setup_test_container().await;
    // Create a role
    let role = create_test_role(&container.db).await;
    // Create a user
    let user = create_test_user(&container.db).await;
    // Assign the role
    let result = role_service::assign_user_roles(
        &container.db,
        &user.id,
        vec![role.id.clone()],
    ).await;
    assert!(result.is_ok());
    // Verify the permission
    let has_permission = permission_service::check_user_permission(
        &container.db,
        &user.id,
        "user",
        "read",
    ).await.unwrap();
    assert!(has_permission);
}
```
## Best Practices
### Service Design
- **Interfaces**: design clear service interfaces
- **Parameter validation**: validate all inputs strictly
- **Return values**: use a uniform return format
- **Doc comments**: document every public method
### Data Handling
- **Validation**: validate data in the service layer
- **Conversion**: convert data types deliberately
- **Cleanup**: purge obsolete data promptly
- **Backups**: back up data before destructive operations
### Security
- **Permission checks**: enforce permissions in the service layer
- **Input filtering**: filter malicious input
- **Sensitive data**: protect sensitive data from leaking
- **Audit logs**: record an audit trail for important operations

docs/UTILS.md (new file, 1271 lines): file diff suppressed because it is too large.
@ -0,0 +1,346 @@
# Creating a Free-Layout Canvas
This example can be scaffolded with `npx @flowgram.ai/create-app@latest free-layout-simple`; the full code and live demo are here:
<div className="rs-tip">
  <a className="rs-link" href="/examples/free-layout/free-layout-simple.html">
    Free layout: basic usage
  </a>
</div>
File structure:
```
- hooks
  - use-editor-props.ts  # canvas configuration
- components
  - base-node.tsx        # node rendering
  - tools.tsx            # canvas toolbar
- initial-data.ts        # initial data
- node-registries.ts     # node configuration
- app.tsx                # canvas entry point
```
### 1. Canvas Entry Point
* `FreeLayoutEditorProvider`: the canvas provider; it creates a react-context for child components to consume
* `EditorRenderer`: the rendered canvas itself; it can be wrapped in other components to position the canvas
```tsx pure title="app.tsx"
import {
  FreeLayoutEditorProvider,
  EditorRenderer,
} from '@flowgram.ai/free-layout-editor';
import '@flowgram.ai/free-layout-editor/index.css'; // load styles
import { useEditorProps } from './use-editor-props' // detailed canvas props configuration
import { Tools } from './components/tools' // canvas toolbar

function App() {
  const editorProps = useEditorProps()
  return (
    <FreeLayoutEditorProvider {...editorProps}>
      <EditorRenderer className="demo-editor" />
      <Tools />
    </FreeLayoutEditorProvider>
  );
}
```
### 2. Configuring the Canvas
The canvas is configured declaratively, covering data, rendering, events, and plugins.
```tsx pure title="hooks/use-editor-props.tsx"
import { useMemo } from 'react';
import { type FreeLayoutProps } from '@flowgram.ai/free-layout-editor';
import { createMinimapPlugin } from '@flowgram.ai/minimap-plugin';
import { initialData } from './initial-data' // initial data
import { nodeRegistries } from './node-registries' // node registry declarations
import { BaseNode } from './components/base-node' // node rendering

export function useEditorProps(): FreeLayoutProps {
  return useMemo<FreeLayoutProps>(
    () => ({
      /**
       * Initial data
       */
      initialData,
      /**
       * Canvas node definitions
       */
      nodeRegistries,
      /**
       * Materials
       */
      materials: {
        renderDefaultNode: BaseNode, // node rendering component
      },
      /**
       * Node engine, used to render node forms
       */
      nodeEngine: {
        enable: true,
      },
      /**
       * Canvas history, used for redo/undo
       */
      history: {
        enable: true,
        enableChangeNode: true, // listen for node form data changes
      },
      /**
       * Canvas initialization callback
       */
      onInit: ctx => {
        // To load data dynamically, do it asynchronously like this:
        // ctx.document.fromJSON(initialData)
      },
      /**
       * Called once the canvas has fully rendered for the first time
       */
      onAllLayersRendered: (ctx) => {},
      /**
       * Canvas disposal callback
       */
      onDispose: () => { },
      plugins: () => [
        /**
         * Minimap plugin
         */
        createMinimapPlugin({}),
      ],
    }),
    [],
  );
}
```
### 3. Configuring Data
The canvas document data is a tree structure and supports nesting.
:::note Document data structure:
* nodes `array` node list, supports nesting
* edges `array` edge list
:::
:::note Node data structure:
* id: `string` unique node identifier (must be unique)
* meta: `object` node UI metadata, e.g. a free-layout node's `position` goes here
* type: `string | number` node type, matched against `type` in `nodeRegistries`
* data: `object` node form data, freely definable by the business
* blocks: `array` the node's branches (`block` reads closer to `Gramming`); currently holds sub-canvas nodes
* edges: `array` the sub-canvas's edges
:::
:::note Edge data structure:
* sourceNodeID: `string` source node id
* targetNodeID: `string` target node id
* sourcePortID?: `string | number` source port id; defaults to the source node's default port
* targetPortID?: `string | number` target port id; defaults to the target node's default port
:::
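The structure above can be sanity-checked mechanically: node ids must be unique, and every edge must reference existing nodes. A hypothetical TypeScript validator (simplified: it ignores nested `blocks` and sub-canvas edges):

```typescript
// Minimal shapes for the check; the real WorkflowJSON carries more fields.
interface NodeJSON { id: string }
interface EdgeJSON { sourceNodeID: string; targetNodeID: string }
interface DocJSON { nodes: NodeJSON[]; edges: EdgeJSON[] }

// Returns a list of human-readable problems; empty means the doc is consistent.
function validateDoc(doc: DocJSON): string[] {
  const errors: string[] = [];
  const ids = new Set<string>();
  for (const node of doc.nodes) {
    if (ids.has(node.id)) errors.push(`duplicate node id: ${node.id}`);
    ids.add(node.id);
  }
  for (const edge of doc.edges) {
    if (!ids.has(edge.sourceNodeID)) errors.push(`unknown source: ${edge.sourceNodeID}`);
    if (!ids.has(edge.targetNodeID)) errors.push(`unknown target: ${edge.targetNodeID}`);
  }
  return errors;
}

console.log(validateDoc({
  nodes: [{ id: 'start_0' }, { id: 'end_0' }],
  edges: [{ sourceNodeID: 'start_0', targetNodeID: 'end_0' }],
})); // → []
```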
```tsx pure title="initial-data.ts"
import { WorkflowJSON } from '@flowgram.ai/free-layout-editor';

export const initialData: WorkflowJSON = {
  nodes: [
    {
      id: 'start_0',
      type: 'start',
      meta: {
        position: { x: 0, y: 0 },
      },
      data: {
        title: 'Start',
        content: 'Start content'
      },
    },
    {
      id: 'node_0',
      type: 'custom',
      meta: {
        position: { x: 400, y: 0 },
      },
      data: {
        title: 'Custom',
        content: 'Custom node content'
      },
    },
    {
      id: 'end_0',
      type: 'end',
      meta: {
        position: { x: 800, y: 0 },
      },
      data: {
        title: 'End',
        content: 'End content'
      },
    },
  ],
  edges: [
    {
      sourceNodeID: 'start_0',
      targetNodeID: 'node_0',
    },
    {
      sourceNodeID: 'node_0',
      targetNodeID: 'end_0',
    },
  ],
};
```
### 4. Declaring Nodes
Node declarations determine each node's type and how it renders.
```tsx pure title="node-registries.tsx"
import { Field, WorkflowNodeRegistry, ValidateTrigger } from '@flowgram.ai/free-layout-editor';

/**
 * You can customize your own node registry
 */
export const nodeRegistries: WorkflowNodeRegistry[] = [
  {
    type: 'start',
    meta: {
      isStart: true, // mark as the start node
      deleteDisable: true, // the start node cannot be deleted
      copyDisable: true, // the start node cannot be copied
      defaultPorts: [{ type: 'output' }], // defines the node's input/output ports; the start node only has an output port
      // useDynamicPort: true, // dynamic ports: DOM elements with data-port-id and data-port-type attributes are used as ports
    },
    /**
     * Configures validation and rendering of the node form.
     * Note: validation is separated from rendering, so data can be validated even when the node is not rendered.
     */
    formMeta: {
      validateTrigger: ValidateTrigger.onChange,
      validate: {
        title: ({ value }) => (value ? undefined : 'Title is required'),
      },
      /**
       * Render form
       */
      render: () => (
        <>
          <Field name="title">
            {({ field }) => <div className="demo-free-node-title">{field.value}</div>}
          </Field>
          <Field name="content">
            {({ field }) => <input onChange={field.onChange} value={field.value} />}
          </Field>
        </>
      )
    },
  },
  {
    type: 'end',
    meta: {
      deleteDisable: true,
      copyDisable: true,
      defaultPorts: [{ type: 'input' }],
    },
    formMeta: {
      // ...
    }
  },
  {
    type: 'custom',
    meta: {
      defaultPorts: [{ type: 'output' }, { type: 'input' }], // a regular node has both ports
    },
    formMeta: {
      // ...
    },
  },
];
```
### 5. Rendering Nodes
The node renderer is where styles, events, and the form-rendering slot are added.
```tsx pure title="components/base-node.tsx"
import { useNodeRender, WorkflowNodeRenderer } from '@flowgram.ai/free-layout-editor';

export const BaseNode = () => {
  /**
   * Provides node-rendering helpers, including the current node and its form
   */
  const { form, node } = useNodeRender()
  /**
   * WorkflowNodeRenderer adds node drag events and port rendering; for deeper customization see the component source:
   * https://github.com/bytedance/flowgram.ai/blob/main/packages/client/free-layout-editor/src/components/workflow-node-renderer.tsx
   */
  return (
    <WorkflowNodeRenderer className="demo-free-node" node={node}>
      {
        // the form is rendered from formMeta
        form?.render()
      }
    </WorkflowNodeRenderer>
  )
};
```
### 6. Adding Tools
Tools control canvas operations such as zooming. They are aggregated in `usePlaygroundTools`, while `useClientContext` provides the canvas context, which contains core modules such as `history`.
```tsx pure title="components/tools.tsx"
import { useEffect, useState } from 'react'
import { usePlaygroundTools, useClientContext } from '@flowgram.ai/free-layout-editor';

export function Tools() {
  const { history } = useClientContext();
  const tools = usePlaygroundTools();
  const [canUndo, setCanUndo] = useState(false);
  const [canRedo, setCanRedo] = useState(false);
  useEffect(() => {
    const disposable = history.undoRedoService.onChange(() => {
      setCanUndo(history.canUndo());
      setCanRedo(history.canRedo());
    });
    return () => disposable.dispose();
  }, [history]);
  return <div style={{ position: 'absolute', zIndex: 10, bottom: 16, left: 226, display: 'flex', gap: 8 }}>
    <button onClick={() => tools.zoomin()}>ZoomIn</button>
    <button onClick={() => tools.zoomout()}>ZoomOut</button>
    <button onClick={() => tools.fitView()}>Fitview</button>
    <button onClick={() => tools.autoLayout()}>AutoLayout</button>
    <button onClick={() => history.undo()} disabled={!canUndo}>Undo</button>
    <button onClick={() => history.redo()} disabled={!canRedo}>Redo</button>
    <span>{Math.floor(tools.zoom * 100)}%</span>
  </div>
}
```
### 7. Result
import { FreeLayoutSimple } from '../../../../components';
<div style={{ position: 'relative', width: '100%', height: '600px'}}>
<FreeLayoutSimple />
</div>



@ -0,0 +1,635 @@
{
"nodes": [
{
"id": "start_0",
"type": "start",
"meta": {
"position": {
"x": 180,
"y": 573.7
}
},
"data": {
"title": "Start",
"outputs": {
"type": "object",
"properties": {
"query": {
"type": "string",
"default": "Hello Flow."
},
"enable": {
"type": "boolean",
"default": true
},
"array_obj": {
"type": "array",
"items": {
"type": "object",
"properties": {
"int": {
"type": "number"
},
"str": {
"type": "string"
}
}
}
}
}
}
}
},
{
"id": "condition_0",
"type": "condition",
"meta": {
"position": {
"x": 1100,
"y": 510.20000000000005
}
},
"data": {
"title": "Condition",
"conditions": [
{
"key": "if_0",
"value": {
"left": {
"type": "ref",
"content": [
"start_0",
"query"
]
},
"operator": "contains",
"right": {
"type": "constant",
"content": "Hello Flow."
}
}
},
{
"key": "if_f0rOAt",
"value": {
"left": {
"type": "ref",
"content": [
"start_0",
"enable"
]
},
"operator": "is_true"
}
}
]
}
},
{
"id": "end_0",
"type": "end",
"meta": {
"position": {
"x": 3008,
"y": 573.7
}
},
"data": {
"title": "End",
"inputsValues": {
"success": {
"type": "constant",
"content": true,
"schema": {
"type": "boolean"
}
},
"query": {
"type": "ref",
"content": [
"start_0",
"query"
]
}
},
"inputs": {
"type": "object",
"properties": {
"success": {
"type": "boolean"
},
"query": {
"type": "string"
}
}
}
}
},
{
"id": "159623",
"type": "comment",
"meta": {
"position": {
"x": 180,
"y": 756.7
}
},
"data": {
"size": {
"width": 240,
"height": 150
},
"note": "hi ~\n\nthis is a comment node\n\n- flowgram.ai"
}
},
{
"id": "http_rDGIH",
"type": "http",
"meta": {
"position": {
"x": 640,
"y": 447.35
}
},
"data": {
"title": "HTTP_1",
"outputs": {
"type": "object",
"properties": {
"body": {
"type": "string"
},
"headers": {
"type": "object"
},
"statusCode": {
"type": "integer"
}
}
},
"api": {
"method": "GET",
"url": {
"type": "template",
"content": ""
}
},
"body": {
"bodyType": "JSON"
},
"timeout": {
"timeout": 10000,
"retryTimes": 1
}
}
},
{
"id": "loop_Ycnsk",
"type": "loop",
"meta": {
"position": {
"x": 1480,
"y": 90.00000000000003
}
},
"data": {
"title": "Loop_1",
"loopFor": {
"type": "ref",
"content": [
"start_0",
"array_obj"
]
},
"loopOutputs": {
"acm": {
"type": "ref",
"content": [
"llm_6aSyo",
"result"
]
}
}
},
"blocks": [
{
"id": "llm_6aSyo",
"type": "llm",
"meta": {
"position": {
"x": 344,
"y": 0
}
},
"data": {
"title": "LLM_3",
"inputsValues": {
"modelName": {
"type": "constant",
"content": "gpt-3.5-turbo"
},
"apiKey": {
"type": "constant",
"content": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"apiHost": {
"type": "constant",
"content": "https://mock-ai-url/api/v3"
},
"temperature": {
"type": "constant",
"content": 0.5
},
"systemPrompt": {
"type": "template",
"content": "# Role\nYou are an AI assistant.\n"
},
"prompt": {
"type": "template",
"content": ""
}
},
"inputs": {
"type": "object",
"required": [
"modelName",
"apiKey",
"apiHost",
"temperature",
"prompt"
],
"properties": {
"modelName": {
"type": "string"
},
"apiKey": {
"type": "string"
},
"apiHost": {
"type": "string"
},
"temperature": {
"type": "number"
},
"systemPrompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
},
"prompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
}
}
},
"outputs": {
"type": "object",
"properties": {
"result": {
"type": "string"
}
}
}
}
},
{
"id": "llm_ZqKlP",
"type": "llm",
"meta": {
"position": {
"x": 804,
"y": 0
}
},
"data": {
"title": "LLM_4",
"inputsValues": {
"modelName": {
"type": "constant",
"content": "gpt-3.5-turbo"
},
"apiKey": {
"type": "constant",
"content": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"apiHost": {
"type": "constant",
"content": "https://mock-ai-url/api/v3"
},
"temperature": {
"type": "constant",
"content": 0.5
},
"systemPrompt": {
"type": "template",
"content": "# Role\nYou are an AI assistant.\n"
},
"prompt": {
"type": "template",
"content": ""
}
},
"inputs": {
"type": "object",
"required": [
"modelName",
"apiKey",
"apiHost",
"temperature",
"prompt"
],
"properties": {
"modelName": {
"type": "string"
},
"apiKey": {
"type": "string"
},
"apiHost": {
"type": "string"
},
"temperature": {
"type": "number"
},
"systemPrompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
},
"prompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
}
}
},
"outputs": {
"type": "object",
"properties": {
"result": {
"type": "string"
}
}
}
}
},
{
"id": "block_start_PUDtS",
"type": "block-start",
"meta": {
"position": {
"x": 32,
"y": 163.1
}
},
"data": {}
},
{
"id": "block_end_leBbs",
"type": "block-end",
"meta": {
"position": {
"x": 1116,
"y": 163.1
}
},
"data": {}
}
],
"edges": [
{
"sourceNodeID": "block_start_PUDtS",
"targetNodeID": "llm_6aSyo"
},
{
"sourceNodeID": "llm_6aSyo",
"targetNodeID": "llm_ZqKlP"
},
{
"sourceNodeID": "llm_ZqKlP",
"targetNodeID": "block_end_leBbs"
}
]
},
{
"id": "group_nYl6D",
"type": "group",
"meta": {
"position": {
"x": 1644,
"y": 730.2
}
},
"data": {
"parentID": "root",
"blockIDs": [
"llm_8--A3",
"llm_vTyMa"
]
}
},
{
"id": "llm_8--A3",
"type": "llm",
"meta": {
"position": {
"x": 180,
"y": 0
}
},
"data": {
"title": "LLM_1",
"inputsValues": {
"modelName": {
"type": "constant",
"content": "gpt-3.5-turbo"
},
"apiKey": {
"type": "constant",
"content": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"apiHost": {
"type": "constant",
"content": "https://mock-ai-url/api/v3"
},
"temperature": {
"type": "constant",
"content": 0.5
},
"systemPrompt": {
"type": "template",
"content": "# Role\nYou are an AI assistant.\n"
},
"prompt": {
"type": "template",
"content": "# User Input\nquery:{{start_0.query}}\nenable:{{start_0.enable}}"
}
},
"inputs": {
"type": "object",
"required": [
"modelName",
"apiKey",
"apiHost",
"temperature",
"prompt"
],
"properties": {
"modelName": {
"type": "string"
},
"apiKey": {
"type": "string"
},
"apiHost": {
"type": "string"
},
"temperature": {
"type": "number"
},
"systemPrompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
},
"prompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
}
}
},
"outputs": {
"type": "object",
"properties": {
"result": {
"type": "string"
}
}
}
}
},
{
"id": "llm_vTyMa",
"type": "llm",
"meta": {
"position": {
"x": 640,
"y": 10
}
},
"data": {
"title": "LLM_2",
"inputsValues": {
"modelName": {
"type": "constant",
"content": "gpt-3.5-turbo"
},
"apiKey": {
"type": "constant",
"content": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"apiHost": {
"type": "constant",
"content": "https://mock-ai-url/api/v3"
},
"temperature": {
"type": "constant",
"content": 0.5
},
"systemPrompt": {
"type": "template",
"content": "# Role\nYou are an AI assistant.\n"
},
"prompt": {
"type": "template",
"content": "# LLM Input\nresult:{{llm_8--A3.result}}"
}
},
"inputs": {
"type": "object",
"required": [
"modelName",
"apiKey",
"apiHost",
"temperature",
"prompt"
],
"properties": {
"modelName": {
"type": "string"
},
"apiKey": {
"type": "string"
},
"apiHost": {
"type": "string"
},
"temperature": {
"type": "number"
},
"systemPrompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
},
"prompt": {
"type": "string",
"extra": {
"formComponent": "prompt-editor"
}
}
}
},
"outputs": {
"type": "object",
"properties": {
"result": {
"type": "string"
}
}
}
}
}
],
"edges": [
{
"sourceNodeID": "start_0",
"targetNodeID": "http_rDGIH"
},
{
"sourceNodeID": "http_rDGIH",
"targetNodeID": "condition_0"
},
{
"sourceNodeID": "condition_0",
"targetNodeID": "llm_8--A3",
"sourcePortID": "if_f0rOAt"
},
{
"sourceNodeID": "condition_0",
"targetNodeID": "loop_Ycnsk",
"sourcePortID": "if_0"
},
{
"sourceNodeID": "llm_vTyMa",
"targetNodeID": "end_0"
},
{
"sourceNodeID": "loop_Ycnsk",
"targetNodeID": "end_0"
},
{
"sourceNodeID": "llm_8--A3",
"targetNodeID": "llm_vTyMa"
}
]
}


@ -0,0 +1,412 @@
# Best Practices
import { FreeFeatureOverview } from '../../../../components';
<FreeFeatureOverview />
## Installation
```shell
npx @flowgram.ai/create-app@latest free-layout
```
## Source Code
https://github.com/bytedance/flowgram.ai/tree/main/apps/demo-free-layout
## Project Overview
### Core Tech Stack
* **Frontend framework**: React 18 + TypeScript
* **Build tool**: Rsbuild (a modern build tool based on Rspack)
* **Styling**: Less + Styled Components + CSS Variables
* **UI library**: Semi Design (@douyinfe/semi-ui)
* **State management**: Flowgram's own editor framework
* **Dependency injection**: Inversify
### Core Dependencies
* **@flowgram.ai/free-layout-editor**: core free-layout editor dependency
* **@flowgram.ai/free-snap-plugin**: auto-alignment and guide-line plugin
* **@flowgram.ai/free-lines-plugin**: line rendering plugin
* **@flowgram.ai/free-node-panel-plugin**: node-add panel rendering plugin
* **@flowgram.ai/minimap-plugin**: minimap plugin
* **@flowgram.ai/free-container-plugin**: sub-canvas plugin
* **@flowgram.ai/free-group-plugin**: grouping plugin
* **@flowgram.ai/form-materials**: form materials
* **@flowgram.ai/runtime-interface**: runtime interface
* **@flowgram.ai/runtime-js**: JS runtime module
## Code Walkthrough
### Directory Structure
```
src/
├── app.tsx                          # application entry
├── editor.tsx                       # main editor component
├── initial-data.ts                  # initial data configuration
├── assets/                          # static assets
├── components/                      # component library
│   ├── index.ts
│   ├── add-node/                    # add-node component
│   ├── base-node/                   # base node component
│   ├── comment/                     # comment component
│   ├── group/                       # group component
│   ├── line-add-button/             # line add button
│   ├── node-menu/                   # node menu
│   ├── node-panel/                  # node-add panel
│   ├── selector-box-popover/        # selection-box popover
│   ├── sidebar/                     # sidebar
│   ├── testrun/                     # test-run components
│   │   ├── hooks/                   # test-run hooks
│   │   ├── node-status-bar/         # node status bar
│   │   ├── testrun-button/          # test-run button
│   │   ├── testrun-form/            # test-run form
│   │   ├── testrun-json-input/      # JSON input component
│   │   └── testrun-panel/           # test-run panel
│   └── tools/                       # tool components
├── context/                         # React Context
│   ├── node-render-context.ts       # context for the node being rendered
│   └── sidebar-context              # sidebar context
├── form-components/                 # form component library
│   ├── form-content/                # form content
│   ├── form-header/                 # form header
│   ├── form-inputs/                 # form inputs
│   └── form-item/                   # form item
│       └── feedback.tsx             # form validation error rendering
├── hooks/
│   ├── index.ts
│   ├── use-editor-props.tsx         # editor props hook
│   ├── use-is-sidebar.ts            # sidebar state hook
│   ├── use-node-render-context.ts   # node render context hook
│   └── use-port-click.ts            # port click hook
├── nodes/                           # node definitions
│   ├── index.ts
│   ├── constants.ts                 # node constants
│   ├── default-form-meta.ts         # default form metadata
│   ├── block-end/                   # block-end node
│   ├── block-start/                 # block-start node
│   ├── break/                       # break node
│   ├── code/                        # code node
│   ├── comment/                     # comment node
│   ├── condition/                   # condition node
│   ├── continue/                    # continue node
│   ├── end/                         # end node
│   ├── group/                       # group node
│   ├── http/                        # HTTP node
│   ├── llm/                         # LLM node
│   ├── loop/                        # loop node
│   ├── start/                       # start node
│   └── variable/                    # variable node
├── plugins/                         # plugin system
│   ├── index.ts
│   ├── context-menu-plugin/         # context-menu plugin
│   ├── runtime-plugin/              # runtime plugin
│   │   ├── client/                  # clients
│   │   │   ├── browser-client/      # browser client
│   │   │   └── server-client/       # server client
│   │   └── runtime-service/         # runtime service
│   └── variable-panel-plugin/       # variable panel plugin
│       └── components/              # variable panel components
├── services/                        # service layer
│   ├── index.ts
│   └── custom-service.ts            # custom service
├── shortcuts/                       # shortcut system
│   ├── index.ts
│   ├── constants.ts                 # shortcut constants
│   ├── shortcuts.ts                 # shortcut definitions
│   ├── type.ts                      # type definitions
│   ├── collapse/                    # collapse shortcut
│   ├── copy/                        # copy shortcut
│   ├── delete/                      # delete shortcut
│   ├── expand/                      # expand shortcut
│   ├── paste/                       # paste shortcut
│   ├── select-all/                  # select-all shortcut
│   ├── zoom-in/                     # zoom-in shortcut
│   └── zoom-out/                    # zoom-out shortcut
├── styles/                          # stylesheets
├── typings/                         # type definitions
│   ├── index.ts
│   ├── json-schema.ts               # JSON Schema types
│   └── node.ts                      # node type definitions
└── utils/                           # utility functions
    ├── index.ts
    └── on-drag-line-end.ts          # drag-line-end handling
```
### Key Directories
#### 1. `/components` - Component Library
* **base-node**: base rendering component for all nodes
* **testrun**: the complete test-run feature module, including status bar, form, and panel
* **sidebar**: sidebar component providing tools and property panels
* **node-panel**: node-add panel with drag-and-drop support for new nodes
#### 2. `/nodes` - Node System
Each node type has its own directory containing:
* node registration (`index.ts`)
* form metadata definition (`form-meta.ts`)
* node-specific components and logic
#### 3. `/plugins` - Plugin System
* **runtime-plugin**: supports both browser and server execution modes
* **context-menu-plugin**: right-click context menu
* **variable-panel-plugin**: variable management panel
#### 4. `/shortcuts` - Shortcut System
Full keyboard-shortcut support, including:
* basic operations: copy, paste, delete, select-all
* view operations: zoom in, zoom out, collapse, expand
* each shortcut has its own implementation module
## Application Architecture
### Core Design Patterns
#### 1. Plugin Architecture
The app uses a highly modular plugin system; each feature is an independent plugin:
```typescript
plugins: () => [
  createFreeLinesPlugin({ renderInsideLine: LineAddButton }),
  createMinimapPlugin({ /* config */ }),
  createFreeSnapPlugin({ /* snap config */ }),
  createFreeNodePanelPlugin({ renderer: NodePanel }),
  createContainerNodePlugin({}),
  createFreeGroupPlugin({ groupNodeRender: GroupNodeRender }),
  createContextMenuPlugin({}),
  createRuntimePlugin({ mode: 'browser' }),
  createVariablePanelPlugin({})
]
```
#### 2. Node Registry Pattern
The different workflow node types are managed through a registry list:
```typescript
export const nodeRegistries: FlowNodeRegistry[] = [
  ConditionNodeRegistry, // condition node
  StartNodeRegistry,     // start node
  EndNodeRegistry,       // end node
  LLMNodeRegistry,       // LLM node
  LoopNodeRegistry,      // loop node
  CommentNodeRegistry,   // comment node
  HTTPNodeRegistry,      // HTTP node
  CodeNodeRegistry,      // code node
  // ... more node types
];
```
#### 3. Dependency Injection
Services are wired up with the Inversify dependency-injection framework:
```typescript
onBind: ({ bind }) => {
  bind(CustomService).toSelf().inSingletonScope();
}
```
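To illustrate what singleton scope buys here, the toy container below mimics the behavior of `bind(...).toSelf().inSingletonScope()`: every lookup of the same class returns the same shared instance. It is a simplified stand-in for the pattern, not the Inversify API.

```typescript
// A service with state, so sharing one instance is observable.
class CustomService {
  saved = 0;
  save(): void { this.saved += 1; }
}

// Minimal singleton-scoped container sketch (not Inversify itself):
// instances are created lazily and cached per constructor.
class MiniContainer {
  private singletons = new Map<new () => unknown, unknown>();

  get<T>(ctor: new () => T): T {
    if (!this.singletons.has(ctor)) {
      this.singletons.set(ctor, new ctor());
    }
    return this.singletons.get(ctor) as T;
  }
}

const container = new MiniContainer();
const a = container.get(CustomService);
const b = container.get(CustomService);
a.save(); // visible through b as well, since a === b
```

Singleton scope matters for services like this one that hold editor-wide state: every plugin or component that resolves `CustomService` sees the same instance.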
## Core Features
### 1. Editor Configuration
`useEditorProps` is the configuration hub for the entire editor:
```typescript
export function useEditorProps(
  initialData: FlowDocumentJSON,
  nodeRegistries: FlowNodeRegistry[]
): FreeLayoutProps {
  return useMemo<FreeLayoutProps>(() => ({
    background: true,   // background grid
    readonly: false,    // read-only mode
    initialData,        // initial data
    nodeRegistries,     // node registry
    // core feature configuration
    playground: { preventGlobalGesture: true /* block macOS browser swipe-navigation gestures */ },
    nodeEngine: { enable: true },
    variableEngine: { enable: true },
    history: { enable: true, enableChangeNode: true },
    // business-rule configuration
    canAddLine: (ctx, fromPort, toPort) => { /* line-creation rules */ },
    canDeleteLine: (ctx, line) => { /* line-deletion rules */ },
    canDeleteNode: (ctx, node) => { /* node-deletion rules */ },
    canDropToNode: (ctx, params) => { /* drop rules */ },
    // plugin configuration
    plugins: () => [/* plugin list */],
    // event handling
    onContentChange: debounce((ctx, event) => { /* autosave */ }, 1000),
    onInit: (ctx) => { /* initialization */ },
    onAllLayersRendered: (ctx) => { /* render complete */ }
  }), []);
}
```
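The rule bodies are elided in the source. As a hedged illustration, a `canAddLine` check might reject self-connections and duplicate lines; the port and line shapes below are assumptions for the sketch, not the framework's actual types.

```typescript
// Hypothetical shapes standing in for the framework's port/line objects.
interface Port { nodeId: string; portId: string }
interface Line { from: Port; to: Port }

// Example rule: forbid connecting a node to itself and forbid
// creating a second line between the same pair of ports.
function canAddLine(existing: Line[], fromPort: Port, toPort: Port): boolean {
  if (fromPort.nodeId === toPort.nodeId) return false; // no self-connection
  return !existing.some(
    (l) => l.from.portId === fromPort.portId && l.to.portId === toPort.portId
  );
}
```

Centralizing rules like this in `useEditorProps` keeps node and line invariants in one place instead of scattering them across plugins.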
### 2. Node Type System
The application supports the following workflow node types:
```typescript
export enum WorkflowNodeType {
  Start = 'start',            // start node
  End = 'end',                // end node
  LLM = 'llm',                // large-language-model node
  HTTP = 'http',              // HTTP request node
  Code = 'code',              // code execution node
  Variable = 'variable',      // variable node
  Condition = 'condition',    // condition node
  Loop = 'loop',              // loop node
  BlockStart = 'block-start', // sub-canvas start node
  BlockEnd = 'block-end',     // sub-canvas end node
  Comment = 'comment',        // comment node
  Continue = 'continue',      // continue node
  Break = 'break',            // break node
}
```
Every node follows the same registration pattern:
```typescript
export const StartNodeRegistry: FlowNodeRegistry = {
  type: WorkflowNodeType.Start,
  meta: {
    isStart: true,
    deleteDisable: true,     // cannot be deleted
    copyDisable: true,       // cannot be copied
    nodePanelVisible: false, // hidden from the node panel
    defaultPorts: [{ type: 'output' }],
    size: { width: 360, height: 211 }
  },
  info: {
    icon: iconStart,
    description: '工作流的起始节点,用于设置启动工作流所需的信息。'
  },
  formMeta, // form configuration
  canAdd() { return false; } // only one start node is allowed
};
```
### 3. Plugin Architecture
Application features are modularized through the plugin system:
#### Core Plugins
1. **FreeLinesPlugin** - line rendering and interaction
2. **MinimapPlugin** - minimap navigation
3. **FreeSnapPlugin** - auto-alignment and guide lines
4. **FreeNodePanelPlugin** - node-adding panel
5. **ContainerNodePlugin** - container nodes (e.g. loop nodes)
6. **FreeGroupPlugin** - node grouping
7. **ContextMenuPlugin** - right-click context menu
8. **RuntimePlugin** - workflow runtime
9. **VariablePanelPlugin** - variable management panel
### 4. Runtime System
Two execution modes are supported:
```typescript
createRuntimePlugin({
  mode: 'browser', // browser mode
  // mode: 'server', // server mode
  // serverConfig: {
  //   domain: 'localhost',
  //   port: 4000,
  //   protocol: 'http',
  // },
})
```
## Design Philosophy and Architectural Strengths
### 1. High Modularity
* **Plugin architecture**: every feature is an independent plugin, easy to extend and maintain
* **Node registry system**: new node types can be added without modifying core code
* **Component-based design**: UI components are highly reusable with clear responsibilities
### 2. Type Safety
* **Full TypeScript support**: end-to-end type coverage from configuration to runtime
* **JSON Schema integration**: node data structures are validated against schemas
* **Strongly typed plugin interfaces**: plugin development has explicit type constraints
### 3. User Experience
* **Live preview**: workflows can be run and debugged in real time
* **Rich interaction**: drag-and-drop, zooming, snapping, keyboard shortcuts, and a complete editing experience
* **Visual feedback**: minimap, status indicators, line animations, and other visual cues
### 4. Extensibility
* **Open plugin system**: third parties can develop custom plugins
* **Flexible node system**: custom node types and form configurations are supported
* **Multi-runtime support**: runs in both browser and server modes
### 5. Performance
* **On-demand loading**: components and plugins can be lazily loaded
* **Debouncing**: high-frequency operations such as autosave are debounced
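The debounced autosave can be sketched with a minimal trailing-edge debounce. This is a simplified stand-in for whatever `debounce` the editor config actually imports (presumably lodash or similar), shown only to make the batching behavior concrete.

```typescript
// Trailing-edge debounce: only the last call in a burst runs, `wait` ms
// after the burst goes quiet. `onContentChange` uses a 1000 ms wait.
function debounce<A extends unknown[]>(fn: (...args: A) => void, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// A burst of canvas edits collapses into a single save call.
let saves = 0;
const autosave = debounce(() => { saves += 1; }, 20);
autosave();
autosave();
autosave();
```

Without the debounce, every keystroke or node drag would trigger a save; with it, only one save fires per editing burst.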
## Technical Highlights
### 1. In-House Editor Framework
Built on the in-house `@flowgram.ai/free-layout-editor` framework, which provides:
* a free-layout canvas system
* full undo/redo support
* lifecycle management for nodes and lines
* a variable engine and expression system
### 2. Modern Build Setup
Rsbuild serves as the build tool:
```typescript
export default defineConfig({
  plugins: [pluginReact(), pluginLess()],
  source: {
    entry: { index: './src/app.tsx' },
    decorators: { version: 'legacy' } // decorator support
  },
  tools: {
    rspack: {
      ignoreWarnings: [/Critical dependency/] // suppress specific warnings
    }
  }
});
```
### 3. Internationalization
Built-in multi-language support:
```typescript
i18n: {
  locale: navigator.language,
  languages: {
    'zh-CN': {
      'Never Remind': '不再提示',
      'Hold {{key}} to drag node out': '按住 {{key}} 可以将节点拖出',
    },
    'en-US': {},
  }
}
```

`docs/test/DEMO` (new file):

```
CHECK = SYS: FLOWLOG:
```
A second new file in this diff (name not shown) adds a workflow test fixture:
```json
{
  "name": "Branch A->COND->B/C (async)",
  "code": "branch_async",
  "design_json": {
    "name": "Branch A->COND->B/C (async)",
    "execution_mode": "async",
    "nodes": [
      { "id": "A", "kind": "http", "name": "http A", "task": "http", "config": { "url": "https://httpbin.org/get", "method": "GET" } },
      { "id": "COND", "kind": "condition", "name": "cond", "task": "condition", "config": { "expression": { "left": { "type": "constant", "value": true }, "operator": "is_true", "right": { "type": "constant", "value": true } } }, "ports": { "yes": { "id": "yes" }, "no": { "id": "no" } } },
      { "id": "B", "kind": "http", "name": "http B", "task": "http", "config": { "url": "https://httpbin.org/anything/B", "method": "GET" } },
      { "id": "C", "kind": "http", "name": "http C", "task": "http", "config": { "url": "https://httpbin.org/anything/C", "method": "GET" } }
    ],
    "edges": [
      { "from": "A", "to": "COND" },
      { "from": "COND", "to": "B", "condition": { "left": { "type": "constant", "value": true }, "operator": "is_true", "right": { "type": "constant", "value": true } } },
      { "from": "COND", "to": "C", "condition": { "left": { "type": "constant", "value": true }, "operator": "is_false", "right": { "type": "constant", "value": true } } }
    ]
  }
}
```
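As a hedged sketch, an evaluator for the fixture's edge-condition format (`{ left, operator, right }`) might look like the following; the engine's real operator set and semantics are assumptions beyond the two operators visible in the JSON.

```typescript
// Shapes inferred from the fixture above; only 'constant' operands appear there.
interface Operand { type: 'constant'; value: unknown }
interface Expression { left: Operand; operator: string; right: Operand }

// Hypothetical evaluator: is_true/is_false inspect the left operand,
// matching the two operators used on the COND->B and COND->C edges.
function evalExpression(expr: Expression): boolean {
  switch (expr.operator) {
    case 'is_true':  return expr.left.value === true;
    case 'is_false': return expr.left.value === false;
    default: throw new Error(`unsupported operator: ${expr.operator}`);
  }
}

// Edge COND->B fires, edge COND->C does not:
const toB: Expression = { left: { type: 'constant', value: true }, operator: 'is_true',  right: { type: 'constant', value: true } };
const toC: Expression = { left: { type: 'constant', value: true }, operator: 'is_false', right: { type: 'constant', value: true } };
```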
