Commit 3abbf97a authored by 宋宏伟

update

Parent 8dc73b9f
codes/etl/data-integration
**/*.csv
**/*.parquet
venv
*.csv
*.parquet
*.log
codes/etl/data-integration/
codes/transform/__pycache__/
.idea
.Rproj.user
## Add nginx
FROM rocker/shiny:4.3.2
# Configure environment variables
ENV PYTHON_VER=3.8.18
ENV BASE_PATH='/opt'
ENV JAVA_HOME=${BASE_PATH}/jdk11
ENV PATH=${JAVA_HOME}/bin:$PATH
ENV KETTLE_HOME=${BASE_PATH}/in2-t2dm/config/etl
# Copy the base files
WORKDIR ${BASE_PATH}
COPY . ${BASE_PATH}/in2-t2dm/
# Add Aliyun mirrors for the system package repos
RUN echo "deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse\n\
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse\n\
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse\n\
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse\n\
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse\n\
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse\n\
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse\n\
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse\n\
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse\n\
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse" > /etc/apt/sources.list
# Update the package list, install Python build dependencies and nginx, then clean the cache
RUN apt-get update && \
apt-get install -y xz-utils build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libsqlite3-dev libreadline-dev libffi-dev curl libbz2-dev pkg-config make nginx cron && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p ${BASE_PATH}/in2-t2dm/
# Install Python
COPY --from=registry.cn-hangzhou.aliyuncs.com/palan/in2-dependency:1.0 ${BASE_PATH}/Python-${PYTHON_VER}.tar.xz ${BASE_PATH}/
RUN tar -xf Python-${PYTHON_VER}.tar.xz && \
cd Python-${PYTHON_VER} && \
./configure --enable-optimizations --enable-shared && \
make -j 4 && \
make install && \
ldconfig && \
pip3 install --no-cache-dir -i https://mirrors.aliyun.com/pypi/simple -r ${BASE_PATH}/in2-t2dm/codes/transform/requirements.txt && \
rm -rf ${BASE_PATH}/Python-${PYTHON_VER}.tar.xz ${BASE_PATH}/Python-${PYTHON_VER}
# Install kettle's Java dependency (JDK), then unpack kettle
COPY --from=registry.cn-hangzhou.aliyuncs.com/palan/in2-dependency:1.0 ${BASE_PATH}/OpenJDK11U-jdk_x64_linux_hotspot_11.0.9.1_1.tar.gz ${BASE_PATH}/
COPY --from=registry.cn-hangzhou.aliyuncs.com/palan/in2-dependency:1.0 ${BASE_PATH}/pdi-ce-9.4.0.0-343.zip ${BASE_PATH}/
RUN unzip -d ${BASE_PATH}/in2-t2dm/codes/etl/ pdi-ce-9.4.0.0-343.zip && \
rm -f ${BASE_PATH}/pdi-ce-9.4.0.0-343.zip && \
tar -zxvf ${BASE_PATH}/OpenJDK11U-jdk_x64_linux_hotspot_11.0.9.1_1.tar.gz -C ${BASE_PATH}/ && \
mv ${BASE_PATH}/jdk-11.0.9.1+1 ${BASE_PATH}/jdk11 && \
rm -rf ${BASE_PATH}/OpenJDK11U-jdk_x64_linux_hotspot_11.0.9.1_1.tar.gz
# Install R packages
RUN R -e "install.packages(c('pacman', 'here', 'rio', 'sp', 'shiny', 'shinydashboard', 'webshot', 'png', 'plotly', 'lubridate', 'showtext'), repos='https://mirrors.tuna.tsinghua.edu.cn/CRAN/')" && \
R -e "install.packages(c('textshaping','ragg','rvest','xml2','gtsummary','gt','arrow', 'tidyverse'), repos='https://mirrors.tuna.tsinghua.edu.cn/CRAN/')"
# Set up the nginx directories
RUN mv /etc/nginx/* ${BASE_PATH}/in2-t2dm/codes/nginx/ && \
rm -rf /etc/nginx && \
ln -s ${BASE_PATH}/in2-t2dm/codes/nginx /etc/nginx && \
mv ${BASE_PATH}/in2-t2dm/codes/nginx/nginx.conf ${BASE_PATH}/in2-t2dm/config/nginx/nginx.conf && \
ln -s ${BASE_PATH}/in2-t2dm/config/nginx/nginx.conf ${BASE_PATH}/in2-t2dm/codes/nginx/nginx.conf && \
mv /var/log/nginx/* ${BASE_PATH}/in2-t2dm/logs/nginx/ && \
rm -rf /var/log/nginx && \
ln -s ${BASE_PATH}/in2-t2dm/logs/nginx /var/log/nginx
.PHONY: fridaytools fridaytools-ali
fridaytools:
docker build . -t fridaytools:0.44
fridaytools-ali:
docker build . -t registry.cn-hangzhou.aliyuncs.com/palan/fridaytools:0.44
## Project structure
```
in2-t2dm
├── codes [code directories]
│   ├── admin [admin console code; the password can be configured in an external compose file]
│   ├── bash [invocation and cleanup scripts]
│   ├── etl
│   ├── nginx [software installed but not yet configured; the in-container nginx code, 3837:/shiny/, 80:/admin/]
│   ├── preprocess
│   ├── shiny
│   └── transform
├── config [configuration file directories]
│   ├── crontab
│   ├── etl [kettle files; passwords are not stored in plain text]
│   ├── nginx
│   ├── shiny
│   └── transform
├── data [data directories]
│   ├── cleandata [data produced by the standardization code]
│   ├── preprocessed [summary data produced by preprocessing]
│   └── rawdata [data extracted by kettle]
├── docker-compose.yml
├── Dockerfile
├── docs
├── logs [log directories]
│   ├── admin
│   ├── crontab
│   ├── etl
│   ├── nginx
│   ├── preprocess
│   ├── shiny
│   └── transform
├── Makefile
├── README.md
└── shinyserver
```
## Running the code
```
# Run the etl job
docker-compose up etl
# Run the transform job
docker-compose up transform
# Run the data preprocessing job
docker-compose up preprocess
# Run the shiny app
docker-compose up -d shiny
```
The commands above invoke the scripts under ./codes/bash/; any shell commands that need to be run or adjusted can be edited in those scripts.
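Each of these service names corresponds to an entry in docker-compose.yml. A minimal sketch of what one such entry might look like (the image tag comes from the Makefile; the script name, volume layout, and entrypoint are assumptions for illustration, not the project's actual file):

```yaml
# Hypothetical sketch of a single service in docker-compose.yml.
services:
  etl:
    image: fridaytools:0.44          # tag from the Makefile
    volumes:
      - ./:/opt/in2-t2dm             # assumed mount of the project tree
    entrypoint: ["/bin/bash", "/opt/in2-t2dm/codes/bash/etl.sh"]  # assumed script name
```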
## Update and deployment workflow
### Merging code
1. Every branch of in2-t2dm keeps the same directory layout as the "Project structure" above; development happens on the individual branches.
2. When code changes, they are submitted for merging into the master branch and merged after review.
3. Pull the master branch in the environment to be deployed.
### Updating the image
- Run make fridaytools in the project directory; this runs the docker build command configured in the Makefile.
- docker build builds the image according to the Dockerfile.
- The build pulls from the registry.cn-hangzhou.aliyuncs.com/palan/in2-dependency:1.0 image, which packages the JDK, kettle, and Python installers. See the in2-dependency project for details.
## Notes
### Software versions and packages
Software versions:
- R: 4.3.2
- shiny server: 1.5.22.1014
- Python: 3.8.18
- kettle: 9.4
- openjdk: 11.0.9.1
- ubuntu: 22.04
Python packages:
- pandas==2.0.3
- pyarrow==14.0.2
R packages:
- pacman
- here
- rio
- sp
- shiny
- shinydashboard
- webshot
- png
- plotly
- lubridate
- showtext
- textshaping
- ragg
- rvest
- xml2
- gtsummary
- gt
- arrow
- tidyverse
### Encrypting the kettle database password
Encryption command:
```
[root@develop in2-t2dm]# docker run --rm fridaytools:0.43 bash -c "/opt/in2-t2dm/codes/etl/data-integration/encr.sh 123"
Encrypted 2be98afc86aa7f2e4cb79ce10bec3fd89
```
The resulting kettle.properties fragment looks like this:
```
dt_postgresql_password=Encrypted 2be98afc86aa7f2e4cb79ce10bec3fd89
```
## Other conventions
* File naming follows the [Google open-source style guides](https://zh-google-styleguide.readthedocs.io/en/latest/contents/) and well-known GitHub open-source projects.
* File and directory names are all lowercase by default and may contain underscores (`_`) or hyphens (`-`), following the project's convention; if there is no convention, prefer `_`. All names must use half-width characters.
* Special files may follow the industry's casing conventions, e.g. 'README.md, Dockerfile, Makefile, \*.R'.
* Within a project, files of the same type must use consistent casing and separators; for example, the data files a project generates must share one naming style.
* No other symbols, including spaces, are allowed in file or directory names.
* Keep the code deployable: avoid absolute paths, and when using relative paths, define them at the top of the code file so they are easy to adjust.
* Provide a dependency file so deployments know exactly which packages and versions the project uses: a 'DESCRIPTION' file for R, a 'requirements.txt' file for Python; these files are usually generated with tooling.
* When the project is managed with git, use a '.gitignore' file so local data files and sensitive files are not committed upstream.
# in2dm Nanjing
#!/bin/bash
cd /opt/in2-t2dm/codes/etl/python/
python3 ETL_all.py
#!/bin/bash
cd /opt/in2-t2dm/codes/preprocess/R/
Rscript dataset_summary.R 2>&1 | tee -a /opt/in2-t2dm/logs/preprocess/preprocess_$(date +%Y%m%d).log
#!/bin/bash
service cron start
cd /opt/in2-t2dm/codes/shiny/
mv in2_t2dm_shiny_v0.1.R app.R
/init
#!/bin/bash
cd /opt/in2-t2dm/codes/transform/; export PYTHONPATH="/opt/in2-t2dm:$PYTHONPATH"
python3 data_governance.py
import time
import logging
from datetime import datetime, timedelta
from configparser import RawConfigParser
from ETL_diagnosis import *
from ETL_drug import *
from ETL_lab import *
from ETL_patient import *
from ETL_visit import *
from de_weight_csv import *
from logging_config import setup_logging

# Call setup_logging to configure logging
setup_logging()

def parse_date(date_string):
    """Parse a date string into a datetime object, catching and handling format errors."""
...@@ -48,6 +21,7 @@ def parse_date(date_string):
        print(f"Invalid date format: {date_string}. Error: {e}")
        raise  # re-raise to stop processing

def run_etl(pv_id, table_name, data_start_time, data_end_time):
    data_start_times = parse_date(data_start_time)
    data_end_times = parse_date(data_end_time)
...@@ -66,47 +40,46 @@ def run_etl(pv_id, table_name, data_start_time, data_end_time):
        logging.info(log_message)
        if table_name == 'patient':
            etl_patient(data_start_time, data_end_time, data_start_times_utc, data_end_times_utc,
                        data_start_times_2_utc, data_end_times_2_utc, pv_id)
        elif table_name == 'visit':
            etl_visit(data_start_time, data_end_time, data_start_times_utc, data_end_times_utc,
                      data_start_times_2_utc, data_end_times_2_utc, pv_id)
        elif table_name == 'prescribing':
            etl_prescribing(data_start_time, data_end_time, data_start_times_utc, data_end_times_utc,
                            data_start_times_2_utc, data_end_times_2_utc, pv_id)
        elif table_name == 'diagnosis':
            etl_diagnosis(data_start_time, data_end_time, data_start_times_utc, data_end_times_utc,
                          data_start_times_2_utc, data_end_times_2_utc, pv_id)
        elif table_name == 'lab':
            etl_lab_result_cm(data_start_time, data_end_time, data_start_times_utc, data_end_times_utc,
                              data_start_times_2_utc, data_end_times_2_utc, pv_id)
    except Exception as e:
        print(f"ETL task failed: {e}")
        logging.error(f"ETL task failed: {e}")

if __name__ == "__main__":
    conf_path = '../../../config/etl/etl_config.ini'  # extraction configuration file
    config = RawConfigParser()
    config.optionxform = str  # keep option keys case-sensitive
    config.read(conf_path, encoding='utf-8')
    # List of organization ids
    pv_ids = eval(config.get('pv_ids', 'pv_ids'))
    # List of date ranges
    date_ranges = eval(config.get('date_ranges', 'date_ranges'))
    # List of table names to extract
    tables = eval(config.get('tables', 'tables'))
    etl_start_time = time.time()
    for pv_id in pv_ids:
        for table_name in tables:
            for start_date, end_date in date_ranges:
                run_etl(pv_id, table_name, start_date, end_date)
    # Deduplicate the extracted CSV files
    deduplicate_csv_files()
    etl_end_time = time.time()
    total_time = etl_end_time - etl_start_time
    print(f"All ETL tasks took {total_time:.2f} seconds in total")
    logging.info(f"All ETL tasks took {total_time:.2f} seconds in total")
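The `__main__` block reads three sections from etl_config.ini and `eval`s each value as a Python literal. A sketch of the expected file shape (section and key names match what the code reads; the values are only illustrative, reusing formats that appear elsewhere in this repo):

```ini
; Hypothetical sketch of config/etl/etl_config.ini.
[pv_ids]
pv_ids = ['320106426090445', '320104466002630']

[date_ranges]
date_ranges = [["2021-01-01", "2021-07-01"], ["2021-07-01", "2022-01-01"]]

[tables]
tables = ['patient', 'visit', 'prescribing', 'diagnosis', 'lab']
```

Note that `ast.literal_eval` would be a safer choice than `eval` here, since the config values are plain list literals.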
from test2 import *
from de_weight_csv import *
create_and_save_csv()
deduplicate_csv_files()
from data_query import *

def etl_diagnosis(d_start_time, d_end_time, d_start_time_utc, d_end_time_utc, d_start_time2_utc, d_end_time2_utc,
                  pv_id):
    output_path = 'diagnosis.csv'
    # Literal regex quantifiers are escaped as {{n,m}} so the f-strings emit {n,m}
    base_queries = [
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -53,11 +54,11 @@ def etl_diagnosis(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,
    AND diagnosis_time < TIMESTAMP '{d_end_time2_utc}'
    AND (diagnosis_name ~* '白内障|视网膜|眼|黄斑|玻璃体|玻血|网脱|失明|弱视|视力'
        OR diagnosis_name ~* '黄斑水肿|失明|眼球?萎缩|眼球?缺失|盲目(3|三)|(视力|视觉)重度|神经'
        OR diagnosis_name ~* '截肢|切断|截断|截|蛋白尿|肾?移植|透析?|尿毒症|CKD(5|Ⅴ|五)|肾.{{0,4}}终末|终末.{{0,4}}肾'
        OR diagnosis_name ~* '(颈|髂总|髂内|肾脏|肢端|腹主|肢|肾小)?主?动脉(粥样硬|痉挛|坏疽|硬|瘤|炎|栓塞|血栓)?化?|间歇性?跛行|红斑性肢痛|(伯|柏)格|雷诺氏|周围血管疾?病|动脉(肌纤维发育异|坏疽|痉挛)?|主动脉(瘤|炎)?|主动脉(粥样)?硬化|(静脉)?曲张|血栓性静脉|下肢(深静脉血栓|静脉曲张|动脉闭塞|血栓性静脉炎|静脉功能不全|静脉炎|(血管|动脉)闭塞症|静脉肌间血栓形成)?|周围循环'
        OR diagnosis_name ~* '冠(心病|状|脉)|旁路移植|搭桥|多支|PCI|心绞痛|动脉硬化.{{0,3}}心脏病|心(肌|脏)(缺|供)血|缺血性心(脏|肌)病|心肌?梗'
        OR diagnosis_name ~* '心梗|心肌梗死|心痛|陈旧(性|型)?(心|ST|非ST|Q|前|侧|下|高|间|广泛|(左|右)心室)|心肌梗塞|胸痹'
        OR diagnosis_name ~* '脑.{{0,3}}(梗|塞|死)|卒中|中风'
        OR diagnosis_name ~* '心(力|室|房)?衰(竭)?|心功能不全|心功能.*级|心源性哮喘|低心排综合征|KILLIP.*级|HBP|高血压'
        OR diagnosis_name ~* '血脂异常|(胆固醇|高脂|甘油三?(脂|酯))血症|高血脂|高粘血症|高(密度)?(酯|脂)蛋白|低(密度)?(酯|脂)蛋白|高?三酰甘油'
        OR diagnosis_name ~* '糖尿病')
...@@ -79,7 +80,7 @@ def etl_diagnosis(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,
from t7 a
join t6 b on a.visit_record_id = b.visit_id;
""",
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -129,11 +130,11 @@ def etl_diagnosis(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,
    AND diagnosis_time < TIMESTAMP '{d_end_time2_utc}'
    AND (diagnosis_name ~* '白内障|视网膜|眼|黄斑|玻璃体|玻血|网脱|失明|弱视|视力'
        OR diagnosis_name ~* '黄斑水肿|失明|眼球?萎缩|眼球?缺失|盲目(3|三)|(视力|视觉)重度|神经'
        OR diagnosis_name ~* '截肢|切断|截断|截|蛋白尿|肾?移植|透析?|尿毒症|CKD(5|Ⅴ|五)|肾.{{0,4}}终末|终末.{{0,4}}肾'
        OR diagnosis_name ~* '(颈|髂总|髂内|肾脏|肢端|腹主|肢|肾小)?主?动脉(粥样硬|痉挛|坏疽|硬|瘤|炎|栓塞|血栓)?化?|间歇性?跛行|红斑性肢痛|(伯|柏)格|雷诺氏|周围血管疾?病|动脉(肌纤维发育异|坏疽|痉挛)?|主动脉(瘤|炎)?|主动脉(粥样)?硬化|(静脉)?曲张|血栓性静脉|下肢(深静脉血栓|静脉曲张|动脉闭塞|血栓性静脉炎|静脉功能不全|静脉炎|(血管|动脉)闭塞症|静脉肌间血栓形成)?|周围循环'
        OR diagnosis_name ~* '冠(心病|状|脉)|旁路移植|搭桥|多支|PCI|心绞痛|动脉硬化.{{0,3}}心脏病|心(肌|脏)(缺|供)血|缺血性心(脏|肌)病|心肌?梗'
        OR diagnosis_name ~* '心梗|心肌梗死|心痛|陈旧(性|型)?(心|ST|非ST|Q|前|侧|下|高|间|广泛|(左|右)心室)|心肌梗塞|胸痹'
        OR diagnosis_name ~* '脑.{{0,3}}(梗|塞|死)|卒中|中风'
        OR diagnosis_name ~* '心(力|室|房)?衰(竭)?|心功能不全|心功能.*级|心源性哮喘|低心排综合征|KILLIP.*级|HBP|高血压'
        OR diagnosis_name ~* '血脂异常|(胆固醇|高脂|甘油三?(脂|酯))血症|高血脂|高粘血症|高(密度)?(酯|脂)蛋白|低(密度)?(酯|脂)蛋白|高?三酰甘油'
        OR diagnosis_name ~* '糖尿病')
...@@ -154,9 +155,12 @@ def etl_diagnosis(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,
from t7 a
join t6 b on a.visit_record_id = b.visit_id;
"""
    ]
    # If pv_id is empty, drop the organization filter from the queries
    if not pv_id:
        queries = [query.replace(f"WHERE organization_id = '{pv_id}'", "") for query in base_queries]
    else:
        queries = base_queries
    # Run the queries and write the results
    execute_queries_and_write_to_csv(queries, output_path)
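Because the SQL above lives in Python f-strings, literal regex quantifiers such as `{0,4}` must be written with doubled braces, or Python treats them as replacement fields. A minimal standalone check of that escaping (the column and pattern here are illustrative, not the full query):

```python
# Inside an f-string, {expr} is interpolated; {{ and }} produce literal braces.
# So a POSIX regex quantifier .{0,4} must be written .{{0,4}} in the template.
end_time = '2024-01-01 00:00:00'
template = f"diagnosis_time < TIMESTAMP '{end_time}' AND diagnosis_name ~* '肾.{{0,4}}终末'"
print(template)  # the {0,4} quantifier survives intact
```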
from data_query import *

def etl_prescribing(d_start_time, d_end_time, d_start_time_utc, d_end_time_utc, d_start_time2_utc, d_end_time2_utc,
                    pv_id):
    output_path = 'prescribing.csv'
    base_queries = [
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -73,7 +74,8 @@ def etl_prescribing(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_ut
    null as rx_end_datetime,
    a.dose as dosage_qty,
    a.dose_unit_name as dosage_unit,
    a.frequency_code,
    a.frequency_name,
    a.qty as quantity,
    null as quantity_uom,
    a.route_name as roa,
...@@ -84,7 +86,7 @@ def etl_prescribing(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_ut
from t7 a
join t6 b on a.visit_record_id = b.visit_id;
""",
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -154,7 +156,8 @@ def etl_prescribing(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_ut
    a.end_time as rx_end_datetime,
    a.dose as dosage_qty,
    a.dose_unit_name as dosage_unit,
    a.frequency_code,
    a.frequency_name,
    a.qty as quantity,
    null as quantity_uom,
    a.route_name as roa,
...@@ -164,7 +167,11 @@ def etl_prescribing(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_ut
from t7 a
join t6 b on a.visit_record_id = b.visit_id;
"""
    ]
    # If pv_id is empty, drop the organization filter from the queries
    if not pv_id:
        queries = [query.replace(f"WHERE organization_id = '{pv_id}'", "") for query in base_queries]
    else:
        queries = base_queries
    # Run the queries and write the results
    execute_queries_and_write_to_csv(queries, output_path)
from data_query import *

def etl_lab_result_cm(d_start_time, d_end_time, d_start_time_utc, d_end_time_utc, d_start_time2_utc, d_end_time2_utc,
                      pv_id):
    output_path = 'lab_result_cm.csv'
    base_queries = [
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -49,7 +50,8 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
select DISTINCT * from iceberg.cdm.lab_report_result
where (test_item_name ~* 'C肽|C-PR' and test_item_name ~* '空腹|1|60|2|120|3|180')
    or (test_item_name ~* '空腹|FPG|空腹血糖' and test_item_name ~* '血')
    or (test_item_name ~* 'OGTT|耐量|负荷' and test_item_name ~* '2|120'
    or (test_item_name ~* 'HbA1c|糖化血红蛋白')
)
select DISTINCT
    b.patient_id,
...@@ -57,7 +59,6 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
    b.patient_type,
    a.report_name as lab_name,
    c.test_item_name as lab_item_name,
    -- a.specimen_name as specimen_source,
    null as specimen_source,
    null as lab_order_datetime,
...@@ -78,7 +79,7 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
join t7 c on c.result_source_id = a.report_source_id;
""",
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -124,7 +125,8 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
select DISTINCT * from iceberg.cdm.lab_report_result
where (test_item_name ~* 'C肽|C-PR' and test_item_name ~* '空腹|1|60|2|120|3|180')
    or (test_item_name ~* '空腹|FPG|空腹血糖' and test_item_name ~* '血')
    or (test_item_name ~* 'OGTT|耐量|负荷' and test_item_name ~* '2|120'
    or (test_item_name ~* 'HbA1c|糖化血红蛋白')
)
select DISTINCT
    b.patient_id,
...@@ -132,7 +134,6 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
    b.patient_type,
    a.report_name as lab_name,
    c.test_item_name as lab_item_name,
    a.specimen_name as specimen_source,
    null as lab_order_datetime,
    a.specimen_collected_time as specimen_datetime,
...@@ -151,7 +152,13 @@ def etl_lab_result_cm(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_
    on a.visit_record_id = b.visit_id
join t7 c on c.result_source_id = a.report_source_id;
"""
    ]
    # If pv_id is empty, drop the organization filter from the queries
    if not pv_id:
        queries = [query.replace(f"WHERE organization_id = '{pv_id}'", "") for query in base_queries]
    else:
        queries = base_queries
    # Run the queries and write the results
    execute_queries_and_write_to_csv(queries, output_path)
from data_query import *

def etl_patient(d_start_time, d_end_time, d_start_time_utc, d_end_time_utc, d_start_time2_utc, d_end_time2_utc,
                pv_id):
    output_path = 'patient.csv'
    base_queries = [
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -63,14 +64,14 @@ def etl_patient(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,d_
    and a.patient_id = b.pat_base_id;
""",
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
    visit_record_id AS visit_id,
    organization_id AS provider_id
FROM iceberg.cdm.visit_record
WHERE organization_id = '{pv_id}'
),
t2 AS (SELECT DISTINCT
    visit_record_id AS visit_id
...@@ -118,7 +119,11 @@ def etl_patient(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,d_
    on a.provider_id = b.organization_id
    and a.patient_id = b.pat_base_id;
"""
    ]
    # If pv_id is empty, drop the organization filter from the queries
    if not pv_id:
        queries = [query.replace(f"WHERE organization_id = '{pv_id}'", "") for query in base_queries]
    else:
        queries = base_queries
    # Run the queries and write the results
    execute_queries_and_write_to_csv(queries, output_path)
from data_query import *

def etl_visit(d_start_time, d_end_time, d_start_time_utc, d_end_time_utc, d_start_time2_utc, d_end_time2_utc,
              pv_id):
    output_path = 'visit.csv'
    base_queries = [
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -48,7 +49,7 @@ def etl_visit(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,d_st
    JOIN t3 b ON a.visit_id = b.visit_id)
select * from t6;
""",
        f"""
WITH
t1 AS (SELECT DISTINCT
    pat_base_id AS patient_id,
...@@ -93,7 +94,11 @@ def etl_visit(pv_id,d_start_time,d_end_time,d_start_time_utc,d_end_time_utc,d_st
select * from t6;
"""
    ]
    # If pv_id is empty, drop the organization filter from the queries
    if not pv_id:
        queries = [query.replace(f"WHERE organization_id = '{pv_id}'", "") for query in base_queries]
    else:
        queries = base_queries
    # Run the queries and write the results
    execute_queries_and_write_to_csv(queries, output_path)
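Each extractor uses the same empty-`pv_id` branch: the f-string has already interpolated `pv_id` into the query text, so when it is empty, `str.replace` can strip the now-empty organization filter. A small standalone check of that logic (`build_queries` is a hypothetical helper mirroring the pattern, not a function from the repo):

```python
def build_queries(pv_id):
    # Interpolate pv_id first, then strip the organization filter if it is empty,
    # mirroring the branch used in the etl_* functions.
    base_queries = [f"SELECT * FROM visit_record WHERE organization_id = '{pv_id}'"]
    if not pv_id:
        return [q.replace(f"WHERE organization_id = '{pv_id}'", "") for q in base_queries]
    return base_queries

print(build_queries('320106426090445')[0])  # keeps the organization filter
print(build_queries('')[0])                 # filter removed entirely
```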
...@@ -3,38 +3,21 @@ from datetime import datetime, timedelta
from flightsql import FlightSQLClient
import pandas as pd
import os
from logging_config import setup_logging

# Call setup_logging to configure logging
setup_logging()

def execute_query(sql_query):
    """
    Execute a SQL query and return the result as a pandas DataFrame.
    :param sql_query: str, the SQL statement
    :return: pandas DataFrame containing the query result
    """
    try:
        # Create a FlightSQLClient instance
        client = FlightSQLClient(host='192.168.101.45', port=50802,
                                 insecure=True, disable_server_verification=True, token=True)
        # Execute the query and fetch the result info
...@@ -56,30 +39,36 @@ def execute_query(sql_query):
    except Exception as e:
        logging.error(f"An error occurred: {e}")
        print(f"An error occurred: {e}")
        return None

def execute_queries_and_write_to_csv(queries, output_paths):
    """
    Execute multiple queries and append the results to a CSV file.
    :param queries: list of SQL statements
    :param output_paths: output CSV file name (resolved under data/rawdata)
    """
    base_paths = os.path.dirname(os.path.abspath(__file__))
    output_path = os.path.join(base_paths, '..', '..', '..', 'data', 'rawdata', output_paths)
    for i, query in enumerate(queries):
        df = execute_query(query)
        # Determine the query type by index
        query_type = "outpatient" if i == 0 else "inpatient"
        # Check whether the DataFrame is empty
        if df is None or df.empty:
            log_message = f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} - {output_path} {query_type} query returned no rows, skipping write."
            logging.info(log_message)
            print(log_message)
            continue  # skip this iteration
        # Log the row count
        log_message = f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} - {output_path} {query_type} query rows: {len(df)}"
        logging.info(log_message)
        print(log_message)
        # Append to the CSV file; write the header only if the file does not exist yet
        if not os.path.exists(output_path):
...@@ -90,9 +79,11 @@ def execute_queries_and_write_to_csv(queries, output_path):
        # Log the successful write
        log_message = f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} - wrote {query_type} query result to {output_path}, rows: {len(df)}"
        logging.info(log_message)
        print(log_message)

# Example usage
if __name__ == "__main__":
    queries = ["SELECT * FROM iceberg.cdm.outpatient_record LIMIT 10;"]
    output_path = "patient.csv"
    execute_queries_and_write_to_csv(queries, output_path)
\ No newline at end of file
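The existence check near the end of `execute_queries_and_write_to_csv` — write the header only when the output file does not exist yet — is the standard append-with-header-once pattern. A minimal stdlib sketch under that assumption (the helper name `append_rows_csv` is illustrative, not part of this repository):

```python
import csv
import os

def append_rows_csv(path, fieldnames, rows):
    """Append dict rows to a CSV file, writing the header only on first creation."""
    first = not os.path.exists(path)  # same existence check as the ETL script
    # utf-8-sig would re-emit a BOM on every append, so only use it when creating
    enc = "utf-8-sig" if first else "utf-8"
    with open(path, "a", newline="", encoding=enc) as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if first:
            writer.writeheader()
        writer.writerows(rows)
```

Each call appends its rows; only the very first call on a fresh file emits the header line.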
import pandas as pd
import os
import logging
from logging_config import setup_logging
setup_logging()
logger = logging.getLogger(__name__)
def deduplicate_csv_files():
    # Absolute path of this file
    base_path = os.path.dirname(os.path.abspath(__file__))
    # Directory holding the raw data
    data_dir = os.path.join(base_path, '..', '..', '..', 'data', 'rawdata')
    try:
        # Collect all .csv files in the directory
        csv_files = [os.path.join(data_dir, f) for f in os.listdir(data_dir) if f.endswith('.csv')]
        logger.info(f"Found {len(csv_files)} CSV files under {data_dir}.")
        for file in csv_files:
            try:
                # Read the CSV file
                df = pd.read_csv(file, low_memory=False)
                # Drop exact duplicate rows
                df_deduplicated = df.drop_duplicates()
                logger.info(f"Rows before dedup ({file}): {len(df)}; after dedup: {len(df_deduplicated)}")
                # Overwrite the original file
                df_deduplicated.to_csv(file, encoding='utf-8-sig', index=False)
                logger.info(f"Deduplicated data written back to {file}")
            except Exception as e:
                logger.error(f"Error while processing {file}: {e}", exc_info=True)
    except Exception as e:
        logger.error(f"Error while listing CSV files: {e}", exc_info=True)
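`drop_duplicates()` keeps the first occurrence of each exact duplicate row. The same semantics can be sketched without pandas (hypothetical helper operating on lists of field lists, shown for clarity, not used by the repo):

```python
def dedup_rows(rows):
    """Drop exact duplicate rows while keeping first-seen order,
    mirroring pandas.DataFrame.drop_duplicates()."""
    seen = set()
    out = []
    for row in rows:
        key = tuple(row)  # rows must be hashable as tuples
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out
```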
import logging
from datetime import datetime, timedelta, timezone
import os


class BeijingTimeFormatter(logging.Formatter):
    """Custom log formatter that renders timestamps in Beijing time (UTC+8)."""
    def formatTime(self, record, datefmt=None):
        # Convert via an explicit UTC anchor so the result does not depend on
        # the host timezone (local time + 8h is only correct on a UTC host)
        bj_time = datetime.fromtimestamp(record.created, tz=timezone.utc) + timedelta(hours=8)
        return bj_time.strftime('%Y-%m-%d %H:%M:%S')


def setup_logging():
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)  # set the log level
    # Absolute path of this file
    base_path = os.path.dirname(os.path.abspath(__file__))
    # Create the log handlers
    log_path = os.path.join(base_path, '..', '..', '..', 'logs', 'etl', 'etl_run.log')
    file_handler = logging.FileHandler(log_path)
    stream_handler = logging.StreamHandler()
    # Create the formatter and attach it to both handlers
    formatter = BeijingTimeFormatter('%(asctime)s - %(levelname)s - %(message)s')
    file_handler.setFormatter(formatter)
    stream_handler.setFormatter(formatter)
    # Avoid adding duplicate handlers. Note that FileHandler is a subclass of
    # StreamHandler, so the console handler must be matched by exact type.
    if not any(isinstance(handler, logging.FileHandler) for handler in root_logger.handlers):
        root_logger.addHandler(file_handler)
    if not any(type(handler) is logging.StreamHandler for handler in root_logger.handlers):
        root_logger.addHandler(stream_handler)
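The formatter's goal is a fixed UTC+8 clock regardless of the host timezone. Anchoring the conversion at an explicit `timezone` object makes that independence directly testable (standalone sketch; `beijing_timestamp` is not a name from this repository):

```python
from datetime import datetime, timezone, timedelta

BEIJING = timezone(timedelta(hours=8))

def beijing_timestamp(epoch_seconds):
    """Render an epoch timestamp on a fixed UTC+8 clock,
    independent of the host's local timezone."""
    return datetime.fromtimestamp(epoch_seconds, tz=BEIJING).strftime('%Y-%m-%d %H:%M:%S')
```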
import pandas as pd
import logging
from logging_config import setup_logging
import os
setup_logging()
logger = logging.getLogger(__name__)
def create_and_save_csv():
    # Build a DataFrame of mock data
    data = {
        'Name': ['Alice', 'Alice', 'Alice', 'Bob', 'Charlie'],
        'Age': [25, 25, 25, 30, 35],
        'City': ['New York', 'New York', 'New York', 'Los Angeles', 'Chicago']
    }
    df = pd.DataFrame(data)
    # Absolute path of this file
    base_path = os.path.dirname(os.path.abspath(__file__))
    # Target path for the CSV file
    csv_path = os.path.join(base_path, '..', '..', '..', 'data', 'rawdata', 'output2.csv')
    logger.info(f"Writing mock data to {csv_path}")
    df.to_csv(csv_path, index=False)
    logger.info("Write succeeded")
import pandas as pd

# Read the CSV file
file_path = 'lab_result_cm.csv'  # replace with the path to your CSV file
df = pd.read_csv(file_path)
# Drop the std_lab_item_name column:
# axis=1 selects a column rather than a row, and
# inplace=True modifies the DataFrame in place instead of returning a copy
df.drop('std_lab_item_name', axis=1, inplace=True)
# Write the modified data back to the original CSV file;
# index=False omits the row index, which is rarely needed in the output
df.to_csv(file_path, index=False)
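For contrast, dropping one named column without pandas is just removing one index from the header and from every row (illustrative stdlib helper, not part of the repo):

```python
def drop_column(header, rows, name):
    """Remove one named column from a header list and its row lists."""
    idx = header.index(name)  # raises ValueError if the column is missing
    new_header = header[:idx] + header[idx + 1:]
    new_rows = [r[:idx] + r[idx + 1:] for r in rows]
    return new_header, new_rows
```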
from data_query import *
queries = """
CREATE VIEW palan_view_procedure1 as
select DISTINCT b.visit_record_id as visit_id,a.operation_time from(
SELECT
case_base_id,
procedure_name,
operation_time
FROM iceberg.cdm.case_operation
where procedure_name ~* '鼻息肉摘除|鼻内窥镜|鼻窦手' -- adjust against the real data; relax this filter to survey the nasal procedure names present
and operation_time >= TIMESTAMP '2019-01-01 00:00:00+00:00'
and operation_time < TIMESTAMP '2021-12-31 00:00:00+00:00') a
join
(select visit_record_id,case_base_id from iceberg.cdm.case_base
where discharge_time >= TIMESTAMP '2018-11-01 00:00:00+00:00'
AND discharge_time < TIMESTAMP '2022-02-28 00:00:00+00:00') b
on a.case_base_id = b.case_base_id;
[sql:query]
-- patient id for each visit and the earliest (2019-2021) operation time.
CREATE VIEW palan_view_procedure2 as
SELECT DISTINCT
a.pat_base_id AS patient_id,
min(b.operation_time) as min_operation_time
FROM iceberg.cdm.visit_record a join palan_view_procedure1 b on a.visit_record_id = b.visit_id
group by a.pat_base_id;
[sql:query]
-- join the patient table to restrict age at time of operation
CREATE VIEW palan_view_procedure3 as
select distinct a.pat_base_id AS patient_id,b.min_operation_time from
iceberg.cdm.patient_base_info a join palan_view_procedure2 b on a.pat_base_id = b.patient_id
WHERE (DATEDIFF(day, a.date_of_birth, b.min_operation_time) / 365.25) >= 12; -- adjust as needed
select * from palan_view_procedure3;
"""
# Call the function
a = execute_query(queries)
print(a)
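The `queries` string separates its statements with `[sql:query]` markers but is passed to `execute_query` as a single string. If the backend expects one statement per call, a splitter along these lines could feed them individually (assumption: the marker is a plain-text separator with no other meaning; `split_statements` is a hypothetical helper):

```python
def split_statements(script, marker="[sql:query]"):
    """Split a multi-statement SQL script on a plain-text marker,
    trimming whitespace and dropping empty fragments."""
    parts = [p.strip() for p in script.split(marker)]
    return [p for p in parts if p]
```

Each fragment could then be run in order, e.g. `for stmt in split_statements(queries): execute_query(stmt)`.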
# Global options
options(scipen = 1000)
options(scipen = 1, digits = 2)
options(encoding = 'UTF-8')
library(tidyverse)
library(here)
library(rio)
library(readxl)
library(janitor)
library(gtsummary)
library(survival)
library(officer)
library(officedown)
library(flextable)
library(zoo)
# library(eoffice)
library(tableone)
library(plotly)
library(reticulate)
calc_age <- function(birthDate, refDate = Sys.Date(), unit = "year") {
require(lubridate)
if (grepl(x = unit, pattern = "year")) {
as.period(interval(birthDate, refDate), unit = 'year')$year
} else if (grepl(x = unit, pattern = "month")) {
as.period(interval(birthDate, refDate), unit = 'month')$month
} else if (grepl(x = unit, pattern = "week")) {
floor(as.period(interval(birthDate, refDate), unit = 'day')$day / 7)
} else if (grepl(x = unit, pattern = "day")) {
as.period(interval(birthDate, refDate), unit = 'day')$day
} else {
    warning("Argument 'unit' must be one of 'year', 'month', 'week', or 'day'")
NA
}
}
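`calc_age` returns completed years, months, weeks, or days between two dates. A Python equivalent of the same semantics (a sketch; edge-case behavior of lubridate's `as.period` around month ends may differ slightly):

```python
from datetime import date

def calc_age(birth, ref, unit="year"):
    """Completed time between two dates, mirroring the R calc_age helper."""
    if unit == "day":
        return (ref - birth).days
    if unit == "week":
        return (ref - birth).days // 7  # completed 7-day weeks
    if unit == "year":
        # subtract one if the birthday has not yet occurred this year
        return ref.year - birth.year - ((ref.month, ref.day) < (birth.month, birth.day))
    if unit == "month":
        months = (ref.year - birth.year) * 12 + ref.month - birth.month
        if ref.day < birth.day:
            months -= 1  # the month is not yet complete
        return months
    raise ValueError("Argument 'unit' must be one of 'year', 'month', 'week', or 'day'")
```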
center_par <- fp_par(text.align = "center", padding = 10)
bold_face <- shortcuts$fp_bold(font.size = 20)
toc <- fpar(
ftext("目 录", prop = bold_face ),
fp_p = center_par)
LANDSCAPE_STOP <-
block_section(
prop_section(
page_size = page_size(orient = "landscape"),
type = "continuous"
)
)
theme_gtsummary_language(
language = "zh-cn",
decimal.mark = NULL,
set_theme = TRUE
)
bold_face1 <- fp_text(font.size = 25, bold = TRUE,
                      font.family = "Times New Roman")
bold_face2 <- fp_text(font.size = 20, font.family = "宋体")
# Global options
options(scipen = 1000)
options(scipen = 1, digits = 2)
options(encoding = 'UTF-8')
library(here)
source(here("codes","preprocess","R","target_group.R"))
# Baseline ---------------------------------------------------------------
## Demographics ####
demo_bl <- target_group %>%
left_join(patient_clean) %>%
mutate(age = calc_age(birth_date, index_date),
age_cat = case_when(age < 18 ~ "< 18",
age >= 18 & age < 30 ~ "18-29",
age >= 30 & age < 40 ~ "30-39",
age >= 40 & age < 50 ~ "40-49",
age >= 50 & age < 60 ~ "50-59",
age >= 60 & age < 70 ~ "60-69",
age >= 70 ~ "≥70")) %>%
select(patient_id, index_date, index_ym, subgroup, sex = std_sex, age, age_cat)
## Complications ####
complication_bl <- target_group %>%
left_join(diag_clean) %>%
filter(diagnosis_datetime <= index_date & diagnosis_datetime >= index_date - 3*30.5) %>%
select(patient_id, index_date, index_ym, subgroup, dx_cat, std_dx_desc) %>%
filter(!(std_dx_desc == "Dyslipidemia"|std_dx_desc == "Hypertension")) %>%
distinct() %>%
drop_na(std_dx_desc)
complication_wide_bl <- target_group %>%
left_join(complication_bl) %>%
select(-dx_cat) %>%
mutate(n = 1) %>%
pivot_wider(names_from = c(std_dx_desc),
values_from = n,
values_fill = 0) %>%
select(-`NA`)
complication_cat_wide_bl <- target_group %>%
left_join(complication_bl) %>%
select(-std_dx_desc) %>%
mutate(n = 1) %>%
distinct() %>%
pivot_wider(names_from = c(dx_cat),
values_from = n,
values_fill = 0) %>%
select(-`NA`)
## Comorbidities ####
comorbidity_wide_bl <- target_group %>%
left_join(diag_clean) %>%
left_join(visit_clean) %>%
select(patient_id, index_date, index_ym, subgroup, visit_id,
admission_datetime, std_dx_desc) %>%
filter(std_dx_desc == "Dyslipidemia"|std_dx_desc == "Hypertension") %>%
distinct() %>%
mutate(n = 1) %>%
pivot_wider(names_from = std_dx_desc,
values_from = n,
values_fill = 0) %>%
mutate(gap = abs(admission_datetime - index_date)) %>%
group_by(patient_id) %>%
arrange(gap) %>%
slice_head(n = 1) %>%
ungroup() %>%
right_join(target_group) %>%
replace_na(list(Dyslipidemia = 0, Hypertension = 0)) %>%
#filter(diagnosis_datetime <= index_date & diagnosis_datetime >= index_date - 3*30.5) %>%
select(patient_id, index_date, index_ym, subgroup, Dyslipidemia, Hypertension) %>%
distinct()
comorbidity_bl <- target_group %>%
left_join(comorbidity_wide_bl) %>%
pivot_longer(cols = c(Dyslipidemia, Hypertension),
names_to = "std_dx_desc",
values_to = "n") %>%
filter(n == 1) %>%
select(patient_id, index_date, index_ym, subgroup, std_dx_desc)
## Laboratory tests ####
# labs for the full population
a1c_bl <- target_group %>%
left_join(lab_clean) %>%
filter(lab_item_name == "HbA1c") %>%
filter(result_datetime <= index_date & result_datetime >= index_date - 3*30.5) %>%
group_by(patient_id) %>%
arrange(result_datetime) %>%
slice_tail(n = 1) %>%
ungroup() %>%
mutate(lab_result_cat = case_when(lab_result >= 4 & lab_result < 7 ~ "[4, 7)",
lab_result >= 7 ~ "[7, +)",
TRUE ~ NA))
lab_bl <- target_group %>%
left_join(lab_clean) %>%
filter(lab_item_name == "FPG"|lab_item_name == "P2hPG") %>%
filter(result_datetime <= index_date & result_datetime >= index_date - 30.5) %>%
group_by(patient_id) %>%
arrange(result_datetime) %>%
slice_tail(n = 1) %>%
ungroup() %>%
mutate(lab_result_cat = case_when(lab_item_name == "FPG" & lab_result >= 3.9 & lab_result < 6 ~ "[3.9, 6.0)",
lab_item_name == "FPG" & lab_result >= 6 & lab_result < 7 ~ "[6.0, 7.0)",
lab_item_name == "FPG" & lab_result >= 7 ~ "[7.0, +)",
lab_item_name == "P2hPG" & lab_result >= 0 & lab_result < 7.8 ~ "[0, 7.8)",
lab_item_name == "P2hPG" & lab_result >= 7.8 & lab_result < 11.1 ~ "[7.8, 11.1)",
lab_item_name == "P2hPG" & lab_result >= 11.1 ~ "[11.1, +)",
TRUE ~ NA)) %>%
bind_rows(a1c_bl) %>%
select(patient_id, index_date, index_ym, subgroup, lab_item_name, lab_result, lab_result_cat) %>%
drop_na(lab_result)
lab_wide_bl <- lab_bl %>%
select(patient_id, index_date, index_ym, subgroup, lab_item_name, lab_result, lab_result_cat) %>%
pivot_wider(names_from = lab_item_name,
values_from = c(lab_result, lab_result_cat),
names_glue = "{lab_item_name}_{.value}",
values_fill = list(lab_result = NA, lab_result_cat = NA)) %>%
rename_with( ~ gsub("_lab_result", "", .x, fixed = TRUE)) %>%
right_join(target_group)
## Medication ####
# bl_rx <- target_group %>%
# left_join(rx_clean, join_by(patient_id, index_visit_id == visit_id)) %>%
# mutate(rx_desc = if_else(new_rx_desc == "Others", "Others", rx_desc)) %>%
# select(patient_id, index_ym, rx_index_date = order_datetime, bl_rx_desc = rx_desc, bl_rx_cat = rx_cat) %>%
# distinct()
rx_bl <- target_group %>%
left_join(rx_clean) %>%
filter(order_datetime >= index_date) %>%
drop_na(rx_desc) %>%
group_by(patient_id) %>%
arrange(rx_cat) %>%
slice_head(n = 1) %>%
ungroup() %>%
right_join(target_group) %>%
# mutate(rx_desc = if_else(new_rx_desc == "Others", "Others", rx_desc)) %>%
select(patient_id, index_date, index_ym, subgroup, rx_index_date = order_datetime, bl_rx_desc = std_rx_desc,
bl_rx_cat = rx_cat, bl_coverage_time = coverage_time) %>%
distinct()
# Follow-up ---------------------------------------------------------------
## Complications ####
complication_fu <- target_group %>%
left_join(diag_clean) %>%
filter(diagnosis_datetime > index_date) %>%
filter(!(std_dx_desc == "Dyslipidemia"|std_dx_desc == "Hypertension")) %>%
select(patient_id, index_ym, subgroup, std_dx_desc, dx_cat) %>%
distinct() %>%
drop_na(std_dx_desc)
complication_wide_fu <- target_group %>%
left_join(complication_fu) %>%
select(-dx_cat) %>%
mutate(n = 1) %>%
group_by(patient_id, index_ym) %>%
pivot_wider(names_from = std_dx_desc,
values_from = n,
values_fill = 0) %>%
ungroup() %>%
select(-`NA`)
complication_cat_wide_fu <- target_group %>%
left_join(complication_fu) %>%
select(-std_dx_desc) %>%
mutate(n = 1) %>%
distinct() %>%
pivot_wider(names_from = dx_cat,
values_from = n,
values_fill = 0) %>%
select(-`NA`)
## Hypoglycemia events ####
hypoglycemia_fu <- target_group %>%
left_join(diag_clean) %>%
filter(hypoglycemia == 1 & (grepl("住院", patient_type))) %>%
filter(diagnosis_datetime > index_date) %>%
select(patient_id, index_date, index_ym, subgroup, std_dx_desc, diagnosis_datetime) %>%
distinct() %>%
drop_na(std_dx_desc) %>%
mutate(hypoglycemia = "severe hypoglycemia") %>%
mutate(fu_quarter = ceiling(as.numeric(difftime(diagnosis_datetime, index_date, units = "days"))/30.5/3)) %>%
select(patient_id, index_date, index_ym, subgroup, diagnosis_datetime, fu_quarter, hypoglycemia)
## Comorbidities ####
comorbidity_fu <- target_group %>%
left_join(diag_clean) %>%
filter(std_dx_desc == "Dyslipidemia"|std_dx_desc == "Hypertension") %>%
filter(diagnosis_datetime > index_date) %>%
select(patient_id, index_date, index_ym, subgroup, std_dx_desc) %>%
distinct() %>%
drop_na(std_dx_desc)
comorbidity_wide_fu <- target_group %>%
left_join(comorbidity_fu) %>%
mutate(n = 1) %>%
group_by(patient_id, index_ym) %>%
pivot_wider(names_from = std_dx_desc,
values_from = n,
values_fill = 0) %>%
ungroup() %>%
select(-`NA`)
## Laboratory tests ####
# labs for the full population
lab_fu <- target_group %>%
left_join(lab_clean) %>%
filter(result_datetime > index_date) %>%
mutate(
fu_period = ceiling(as.numeric(difftime(result_datetime, index_date, units = "days"))/30.5),
fu_quarter = ceiling(as.numeric(difftime(result_datetime, index_date, units = "days"))/30.5/3),
fu_year = ceiling(as.numeric(difftime(result_datetime, index_date, units = "days"))/365.25)) %>%
mutate(lab_unnormal = case_when(lab_item_name == "FPG" & lab_result >= 7 ~ 1,
lab_item_name == "P2hPG" & lab_result >= 11.1 ~ 1,
lab_item_name == "HbA1c" & lab_result >= 7 ~ 1,
TRUE ~ NA),
lab_FPG_normal = case_when(lab_item_name == "FPG" & lab_result < 7 ~ 1,
TRUE ~ NA),
lab_HbA1c_normal = case_when(lab_item_name == "HbA1c" & lab_result < 7 ~ 1,
TRUE ~ NA),
) %>%
drop_na(lab_result) %>%
select(patient_id, index_date, index_ym, subgroup, result_datetime,
fu_period, fu_quarter, fu_year, visit_id, lab_item_name, lab_result,
lab_unnormal, lab_FPG_normal, lab_HbA1c_normal)
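`lab_fu` assigns each result a 1-based follow-up month, quarter, and year using a 30.5-day month and a 365.25-day year. The binning arithmetic in isolation (a Python sketch of the R `ceiling` expressions; `followup_bins` is an illustrative name):

```python
import math

def followup_bins(days_since_index):
    """1-based follow-up month / quarter / year buckets, using the
    script's 30.5-day month and 365.25-day year conventions."""
    month = math.ceil(days_since_index / 30.5)
    quarter = math.ceil(days_since_index / 30.5 / 3)
    year = math.ceil(days_since_index / 365.25)
    return month, quarter, year
```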
cp_wide_fu <- target_group %>%
left_join(lab_fu) %>%
filter(grepl("CP", lab_item_name)) %>%
group_by(patient_id, visit_id, lab_item_name) %>%
arrange(lab_result) %>%
slice_tail(n = 1) %>%
ungroup() %>%
group_by(patient_id, visit_id) %>%
pivot_wider(names_from = lab_item_name,
values_from = lab_result,
values_fill = NA) %>%
ungroup() %>%
mutate(# uncontrol = case_when(FPG >= 7 ~ 1,
# P2hPG >= 11.1 ~ 1,
# HbA1c >= 7 ~ 1,
# TRUE ~ NA),
# control = case_when(FPG < 7 ~ 1,
# HbA1c < 7 ~ 1,
# TRUE ~ NA),
cp_abnormal = case_when(FCP > 1 ~ 1,
`1hCP` > 2.5|`1hCP` > 5*FCP ~ 1,
`2hCP` > 5*FCP ~ 1,
`3hCP` > 1 ~ 1,
TRUE ~ NA))
# lab_month_fu <- lab_fu %>%
# group_by(patient_id, index_ym, lab_item_name, fu_period) %>%
# arrange(lab_result) %>%
# slice_head(n = 1) %>%
# ungroup() %>%
# select(patient_id, index_ym, fu_period, lab_item_name, lab_result)
lab_quarter_fu <- lab_fu %>%
group_by(patient_id, index_ym, lab_item_name, fu_quarter) %>%
arrange(result_datetime) %>%
slice_tail(n = 1) %>%
ungroup() %>%
select(patient_id, index_date, index_ym, subgroup, fu_period, fu_quarter,
lab_item_name, lab_result)
## Vital signs ####
vital_fu_mid <- target_group %>%
left_join(vital_clean) %>%
filter(measure_datetime > index_date) %>%
mutate(
fu_period = ceiling(as.numeric(difftime(measure_datetime, index_date, units = "days"))/30.5),
fu_quarter = ceiling(as.numeric(difftime(measure_datetime, index_date, units = "days"))/30.5/3),
fu_year = ceiling(as.numeric(difftime(measure_datetime, index_date, units = "days"))/365.25)) %>%
select(patient_id, index_date, index_ym, subgroup, measure_datetime,
fu_period, fu_quarter, fu_year, visit_id, item_name, item_value) %>%
pivot_wider(names_from = item_name,
values_from = item_value,
values_fill = NA)
vital_fu <- vital_fu_mid %>%
  mutate(收缩压 = if (!"收缩压" %in% names(vital_fu_mid)) NaN else 收缩压,  # ifelse() would error when the column is absent
         舒张压 = if (!"舒张压" %in% names(vital_fu_mid)) NaN else 舒张压) %>%
mutate(hypertension = case_when(收缩压 > 140 & 舒张压 > 90 ~ "both",
收缩压 > 140 ~ "systolic",
舒张压 > 90 ~ "diastolic"))
# Sanofi-specific ---------------------------------------------------------
## Treatment discontinuation ####
rx_discontinue <- target_group %>%
left_join(rx_bl) %>%
# filter(subgroup == "Drug Naive") %>%
left_join(rx_clean) %>%
filter(order_datetime >= rx_index_date) %>%
filter(bl_rx_desc == std_rx_desc) %>%
group_by(patient_id) %>%
arrange(order_datetime) %>%
mutate(gap = abs(order_datetime - lead(order_datetime)),
discontinue_date = case_when((gap > 90) | is.na(gap) ~ order_datetime + coverage_time,
TRUE ~ NA)) %>%
filter(!is.na(discontinue_date)) %>%
arrange(discontinue_date) %>%
slice_head(n = 1) %>%
ungroup() %>%
rename(discontinue_rx_desc = std_rx_desc) %>%
select(patient_id, index_date, index_ym, subgroup, discontinue_date) %>%
drop_na(discontinue_date) %>%
distinct()
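`rx_discontinue` marks an episode as ended when the gap to the next order of the same drug exceeds 90 days (or there is no next order), and dates the discontinuation at that order's time plus its coverage. The rule reduces to the following (a hypothetical helper over `(order_date, coverage_days)` pairs, not code from the repo):

```python
from datetime import timedelta

def first_discontinuation(orders, gap_days=90):
    """Earliest discontinuation date from (order_date, coverage_days) pairs:
    a refill gap longer than gap_days, or no further order, ends the episode
    at that order's date plus its coverage time."""
    orders = sorted(orders)
    for (order_date, cover), nxt in zip(orders, orders[1:] + [None]):
        gap = (nxt[0] - order_date).days if nxt else None
        if gap is None or gap > gap_days:
            return order_date + timedelta(days=cover)
    return None
```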
## Uncontrolled glycemia ####
lab_uncontrol <- target_group %>%
left_join(rx_discontinue) %>%
left_join(lab_clean %>%
filter(lab_item_name == "HbA1c")) %>%
filter(result_datetime <= discontinue_date & result_datetime >= index_date) %>%
group_by(patient_id) %>%
arrange(result_datetime) %>%
slice_tail(n = 1) %>%
ungroup() %>%
filter(lab_result >= 7) %>%
rename(uncontrol_datetime = result_datetime) %>%
select(patient_id, index_date, index_ym, subgroup,
uncontrol_visit_id = visit_id, uncontrol_datetime) %>%
mutate(uncontrol = 1) %>%
right_join(target_group) %>%
replace_na(list(uncontrol = 0))
rx_reorder <- rx_discontinue %>%
left_join(lab_uncontrol) %>%
filter(uncontrol == 1) %>%
left_join(rx_clean) %>%
filter(order_datetime > discontinue_date) %>%
# drop_na(new_rx_desc) %>%
group_by(patient_id) %>%
arrange(order_datetime, rx_cat) %>%
slice_head(n = 1) %>%
ungroup() %>%
right_join(target_group) %>%
mutate(std_rx_desc = replace_na(std_rx_desc, "None"),
rx_cat = replace_na(rx_cat, "None"))%>%
select(patient_id, index_date, index_ym, subgroup, reorder_visit_id = visit_id,
reorder_rx_desc = std_rx_desc, reorder_rx_cat = rx_cat, reorder_datetime = order_datetime) %>%
distinct()
## Laboratory tests after regimen change ####
lab_quarter_fu_reorder <- rx_reorder %>%
left_join(lab_fu) %>%
filter(result_datetime > reorder_datetime) %>%
mutate(
reorder_fu_period = ceiling(as.numeric(difftime(result_datetime, reorder_datetime, units = "days"))/30.5),
reorder_fu_quarter = ceiling(as.numeric(difftime(result_datetime, reorder_datetime, units = "days"))/30.5/3),
reorder_fu_year = ceiling(as.numeric(difftime(result_datetime, reorder_datetime, units = "days"))/365.25)) %>%
drop_na(lab_result) %>%
select(patient_id, reorder_datetime, index_ym, subgroup, result_datetime, reorder_rx_cat,
reorder_fu_period, reorder_fu_quarter, reorder_fu_year, visit_id, lab_item_name, lab_result) %>%
group_by(patient_id, index_ym, lab_item_name, reorder_fu_quarter, reorder_rx_cat) %>%
arrange(result_datetime) %>%
slice_tail(n = 1) %>%
ungroup() %>%
  select(patient_id, index_ym, subgroup, reorder_rx_cat,
         reorder_fu_period, reorder_fu_quarter, lab_item_name, lab_result)
hypoglycemia_fu_reorder <- rx_reorder %>%
left_join(hypoglycemia_fu) %>%
drop_na(hypoglycemia) %>%
mutate(reorder_fu_quarter = ceiling(as.numeric(difftime(diagnosis_datetime, reorder_datetime, units = "days"))/30.5/3)) %>%
drop_na(reorder_fu_quarter) %>%
select(patient_id, reorder_rx_cat, reorder_fu_quarter, index_ym, subgroup, hypoglycemia)
save.image(here("data", "clean", "dataset_for_analysis.RData"))
# Global options
options(scipen = 1000)
options(scipen = 1, digits = 2)
options(encoding = 'UTF-8')
library(here)
source(here("codes","preprocess","R","common.R"))
source(here("codes","preprocess","R","dataset_for_analysis.R"))
# load(here("data", "clean", "dataset_for_analysis.RData"))
# date_seq <- format(seq.Date(from = as.Date("2022-01-01",format = "%Y-%m-%d"), by = "month", length.out = 36), format = "%Y-%m")
# varies dynamically with the dates present in the data
date_seq <- visit_clean %>%
arrange(admission_datetime) %>%
select(admission_datetime) %>%
mutate(admission_datetime = format(admission_datetime, format = "%Y-%m")) %>%
distinct() %>%
pull(admission_datetime)
# Study population ####
endpoint_subgroup <- target_group %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup)) %>%
group_by(index_ym, subgroup, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(endpoint_subgroup, "../../../data/preprocessed/endpoint_subgroup.csv")
# Initial regimen ####
endpoint_first_regimen <- target_group %>%
left_join(rx_bl) %>%
select(patient_id, index_ym, subgroup, bl_rx_desc, bl_rx_cat)
## Regimens of drug-naive patients ####
bl_first_regimen_naive <- endpoint_first_regimen %>%
drop_na(bl_rx_cat) %>%
filter(subgroup == "Drug Naive") %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
bl_rx_cat = as.factor(bl_rx_cat)) %>%
group_by(index_ym, bl_rx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_first_regimen_naive, "../../../data/preprocessed/bl_first_regimen_naive.csv")
## Regimens of revisit patients ####
bl_first_regimen_revisit <- endpoint_first_regimen %>%
drop_na(bl_rx_cat) %>%
filter(subgroup == "Revisit") %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
bl_rx_cat = as.factor(bl_rx_cat)) %>%
group_by(index_ym, bl_rx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_first_regimen_revisit, "../../../data/preprocessed/bl_first_regimen_revisit.csv")
# Population characteristics ####
## Age and sex ####
bl_demo_summary <- demo_bl %>%
select(patient_id, index_ym, sex, age = age_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
age = as.factor(age),
sex = as.factor(sex)) %>%
group_by(index_ym, sex, age, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_demo_summary, "../../../data/preprocessed/bl_demo_summary.csv")
## Comorbidity status ####
# baseline
bl_comorbidity_summary <- comorbidity_bl %>%
select(index_ym, std_dx_desc) %>%
drop_na(std_dx_desc) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
std_dx_desc = as.factor(std_dx_desc)) %>%
group_by(index_ym, std_dx_desc, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
# follow-up
fu_comorbidity_summary <- comorbidity_fu %>%
select(index_ym, std_dx_desc) %>%
drop_na(std_dx_desc) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
std_dx_desc = as.factor(std_dx_desc)) %>%
group_by(index_ym, std_dx_desc, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_comorbidity_summary, "../../../data/preprocessed/bl_comorbidity_summary.csv")
write_csv(fu_comorbidity_summary, "../../../data/preprocessed/fu_comorbidity_summary.csv")
## Complication status ####
# baseline
bl_complication_summary <- complication_bl %>%
select(index_ym, std_dx_desc) %>%
drop_na(std_dx_desc) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
std_dx_desc = as.factor(std_dx_desc)) %>%
group_by(index_ym, std_dx_desc, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
bl_complication_cat_summary <- complication_bl %>%
select(index_ym, dx_cat) %>%
drop_na(dx_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
dx_cat = as.factor(dx_cat)) %>%
group_by(index_ym, dx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
# follow-up
fu_complication_summary <- complication_fu %>%
select(index_ym, std_dx_desc) %>%
drop_na(std_dx_desc) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
std_dx_desc = as.factor(std_dx_desc)) %>%
group_by(index_ym, std_dx_desc, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
fu_complication_cat_summary <- complication_fu %>%
select(index_ym, dx_cat) %>%
drop_na(dx_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
dx_cat = as.factor(dx_cat)) %>%
group_by(index_ym, dx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_complication_summary, "../../../data/preprocessed/bl_complication_summary.csv")
write_csv(bl_complication_cat_summary, "../../../data/preprocessed/bl_complication_cat_summary.csv")
write_csv(fu_complication_summary, "../../../data/preprocessed/fu_complication_summary.csv")
write_csv(fu_complication_cat_summary, "../../../data/preprocessed/fu_complication_cat_summary.csv")
## Glycemic control ####
# lab indicators
bl_lab_summary <- lab_wide_bl %>%
  mutate(FPG = if (!"FPG" %in% names(lab_wide_bl)) NaN else FPG,  # ifelse() would error when the column is absent
         P2hPG = if (!"P2hPG" %in% names(lab_wide_bl)) NaN else P2hPG) %>%
select(index_ym, FPG, P2hPG, HbA1c)
write_csv(bl_lab_summary, "../../../data/preprocessed/bl_lab_summary.csv")
# HbA1c
bl_HbA1c_cat_summary <- lab_bl %>%
select(patient_id, index_ym, lab_item_name, lab_result_cat) %>%
filter(lab_item_name == "HbA1c") %>%
drop_na(lab_result_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_result_cat = as.factor(lab_result_cat),
lab_result_cat = ordered(lab_result_cat, levels = c("[4, 7)",
"[7, +)"))) %>%
group_by(index_ym, lab_result_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_HbA1c_cat_summary, "../../../data/preprocessed/bl_HbA1c_cat_summary.csv")
# FPG
bl_FPG_cat_summary <- lab_bl %>%
select(patient_id, index_ym, lab_item_name, lab_result_cat) %>%
filter(lab_item_name == "FPG") %>%
drop_na(lab_result_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_result_cat = as.factor(lab_result_cat),
lab_result_cat = ordered(lab_result_cat, levels = c("[3.9, 6.0)",
"[6.0, 7.0)",
"[7.0, +)"))) %>%
group_by(index_ym, lab_result_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_FPG_cat_summary, "../../../data/preprocessed/bl_FPG_cat_summary.csv")
# P2hPG
bl_P2hPG_cat_summary <- lab_bl %>%
select(patient_id, index_ym, lab_item_name, lab_result_cat) %>%
filter(lab_item_name == "P2hPG") %>%
drop_na(lab_result_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_result_cat = as.factor(lab_result_cat),
lab_result_cat = ordered(lab_result_cat, levels = c("[0, 7.8)",
"[7.8, 11.1)",
"[11.1, +)"))) %>%
group_by(index_ym, lab_result_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(bl_P2hPG_cat_summary, "../../../data/preprocessed/bl_P2hPG_cat_summary.csv")
# tested vs. untested
bl_lab_if_summary <- target_group %>%
left_join(lab_bl) %>%
select(patient_id, index_ym, lab_item_name) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_item_name = as.factor(lab_item_name)) %>%
group_by(index_ym, patient_id, lab_item_name, .drop = FALSE) %>%
summarise(n = n()) %>%
ungroup() %>%
drop_na(patient_id) %>%
drop_na(lab_item_name) %>%
mutate(lab_if = case_when(n == 0 ~ "Untested",
n == 1 ~ "Tested",
TRUE ~ NA),
lab_if = as.factor(lab_if)) %>%
group_by(index_ym, lab_item_name, lab_if, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup() %>%
pivot_wider(names_from = lab_item_name,
values_from = patient_num)
write_csv(bl_lab_if_summary, "../../../data/preprocessed/bl_lab_if_summary.csv")
# Treatment pathway ####
endpoint_patient_flow_uncontrol <- target_group %>%
left_join(lab_uncontrol) %>%
left_join(rx_bl) %>%
left_join(rx_discontinue) %>%
left_join(rx_reorder) %>%
mutate(
continue_period = as.numeric(difftime(discontinue_date, rx_index_date, units = "days")), # duration on the initial regimen
reorder_period = as.numeric(difftime(reorder_datetime, discontinue_date, units = "days")) # time until re-prescription
) %>%
select(patient_id, index_date, index_ym, subgroup, bl_rx_cat,
bl_rx_desc, uncontrol, reorder_rx_cat, reorder_rx_desc,
continue_period, reorder_period)
write_csv(endpoint_patient_flow_uncontrol, "../../../data/preprocessed/endpoint_patient_flow_uncontrol.csv")
## Initial treatment flow ####
pf_DrugNaive_Revisit <- endpoint_patient_flow_uncontrol %>%
select(patient_id, index_ym, subgroup, bl_rx_cat, bl_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = subgroup,
values_from = n,
values_fill = 0) %>%
select(patient_id, index_ym, `Drug Naive`, Revisit, bl_rx_cat, bl_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = bl_rx_cat,
values_from = n,
values_fill = 0) %>%
select(patient_id, index_ym, `Drug Naive`, Revisit, inj, oad, bl_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = bl_rx_desc,
values_from = n,
values_fill = 0) %>%
rename(Injectable = inj,
OAD = oad) %>%
select(index_ym, `Drug Naive`, Revisit, Injectable, OAD,
Premix, Basal, `GLP-1`, Bolus,
`α-glucosidase`, `DDP-4`, Biguanide, Thiazolidinedione, `SGLT-2`, Glinide, Sulfonylurea, Compound)
write_csv(pf_DrugNaive_Revisit, "../../../data/preprocessed/pf_DrugNaive_Revisit.csv")
## Flow after uncontrolled status ####
pf_uncontrol <- endpoint_patient_flow_uncontrol %>%
select(patient_id, index_ym, subgroup, uncontrol, bl_rx_cat, bl_rx_desc,
reorder_rx_cat, reorder_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = bl_rx_cat,
values_from = n,
values_fill = 0) %>%
select(patient_id, index_ym, subgroup, inj, oad, uncontrol, bl_rx_desc, reorder_rx_cat,
reorder_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = bl_rx_desc,
values_from = n,
values_fill = 0) %>%
select(patient_id, index_ym, subgroup, oad, inj, Premix, Basal, `GLP-1`, Bolus,
`α-glucosidase`, `DDP-4`, Biguanide, Thiazolidinedione, `SGLT-2`, Glinide, Sulfonylurea, Compound,
uncontrol, reorder_rx_cat, reorder_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = reorder_rx_cat,
names_prefix = "reorder_",
values_from = n,
values_fill = 0) %>%
select(patient_id, index_ym, subgroup, inj, oad, Premix, Basal, `GLP-1`, Bolus,
`α-glucosidase`, `DDP-4`, Biguanide, Thiazolidinedione, `SGLT-2`, Glinide, Sulfonylurea, Compound,
uncontrol, reorder_inj, reorder_oad, reorder_rx_desc) %>%
mutate(n = 1) %>%
pivot_wider(names_from = reorder_rx_desc,
names_prefix = "reorder_",
values_from = n,
values_fill = 0) %>%
rename(Injectable = inj,
OAD = oad,
Uncontrol = uncontrol) %>%
# select(patient_id, index_ym, subgroup, OAD, Injectable, Premix, Basal, `GLP-1`, Bolus,
# `α-glucosidase`, `DDP-4`, Biguanide, Thiazolidinedione, `SGLT-2`, Glinide, Sulfonylurea, Compound,
# Uncontrol, reorder_inj, reorder_oad, reorder_Premix, reorder_Basal, `reorder_GLP-1`, reorder_Bolus,
# `reorder_α-glucosidase`, `reorder_DDP-4`, reorder_Biguanide, reorder_Thiazolidinedione,
# `reorder_SGLT-2`, reorder_Glinide, reorder_Sulfonylurea, reorder_Compound)
select(patient_id, index_ym, subgroup, OAD, Injectable, Premix, Basal, `GLP-1`, Bolus,
`α-glucosidase`, `DDP-4`, Biguanide, Thiazolidinedione, `SGLT-2`, Glinide, Sulfonylurea, Compound,
Uncontrol, reorder_inj, reorder_oad)
# drug-naive patients
pf_DrugNaive_uncontrol <- pf_uncontrol %>%
filter(subgroup == "Drug Naive")
write_csv(pf_DrugNaive_uncontrol, "../../../data/preprocessed/pf_DrugNaive_uncontrol.csv")
# revisit patients
pf_Revisit_uncontrol <- pf_uncontrol %>%
filter(subgroup == "Revisit")
write_csv(pf_Revisit_uncontrol, "../../../data/preprocessed/pf_Revisit_uncontrol.csv")
## Duration of initial treatment ####
pf_period <- endpoint_patient_flow_uncontrol %>%
select(index_ym, subgroup, bl_rx_cat, bl_rx_desc, continue_period, reorder_period)
write_csv(pf_period, "../../../data/preprocessed/pf_period.csv")
# Lab indicators ####
## Quarterly change: HbA1c, fasting plasma glucose, 2-hour post-load glucose ####
fu_lab_summary <- lab_bl %>%
select(patient_id, index_ym, subgroup, lab_item_name, lab_result) %>%
mutate(fu_quarter = 0) %>%
bind_rows(select(lab_quarter_fu, patient_id, index_ym, subgroup, lab_item_name, lab_result, fu_quarter)) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
fu_quarter = as.factor(fu_quarter),
lab_item_name = as.factor(lab_item_name)) %>%
right_join(rx_bl) %>%
select(-patient_id)
write_csv(fu_lab_summary, "../../../data/preprocessed/fu_lab_summary.csv")
## Quarterly change - hypoglycemia ####
# fu_hypoglycemia_summary <- lab_fu %>%
# mutate(hypoglycemia = case_when(lab_item_name == "FPG" & lab_result < 3.0 ~ "severe hypoglycemia",
#                                  lab_item_name == "FPG" & lab_result < 3.9 ~ "hypoglycemia",
# TRUE ~ NA)) %>%
# drop_na(hypoglycemia) %>%
# select(patient_id, index_ym, subgroup, fu_quarter, visit_id, hypoglycemia) %>%
# distinct() %>%
# mutate(index_ym = as.factor(index_ym),
# index_ym = ordered(index_ym, levels = date_seq),
# subgroup = as.factor(subgroup),
# fu_quarter = as.factor(fu_quarter),
# hypoglycemia = as.factor(hypoglycemia)) %>%
# group_by(patient_id, index_ym, subgroup, fu_quarter, hypoglycemia, .drop = FALSE) %>%
# summarise(event_num = n()) %>%
# ungroup() %>%
# group_by(index_ym, subgroup, fu_quarter, hypoglycemia, event_num, .drop = FALSE) %>%
# summarise(patient_num = n()) %>%
# ungroup()
fu_hypoglycemia_summary <- lab_fu %>%
bind_rows(hypoglycemia_fu) %>%
mutate(hypoglycemia = case_when(lab_item_name == "FPG" & lab_result < 3.0 ~ "severe hypoglycemia",
lab_item_name == "FPG" & lab_result < 3.9 ~ "hypoglycemia",
TRUE ~ hypoglycemia)) %>%
right_join(rx_bl) %>%
drop_na(bl_rx_cat) %>%
drop_na(hypoglycemia) %>%
select(patient_id, index_ym, subgroup, fu_quarter, hypoglycemia, bl_rx_cat) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
bl_rx_cat = as.factor(bl_rx_cat),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter),
hypoglycemia = as.factor(hypoglycemia)) %>%
group_by(index_ym, subgroup, fu_quarter, hypoglycemia, bl_rx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(fu_hypoglycemia_summary, "../../../data/preprocessed/fu_hypoglycemia_summary.csv")
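The hypoglycemia grading above relies on `case_when()` evaluating branches in order: an FPG below 3.0 mmol/L must hit the severe branch before the `< 3.9` branch can claim it. A minimal Python sketch of the same ordered thresholding (`classify_hypoglycemia` is an illustrative name, not part of this pipeline):

```python
def classify_hypoglycemia(lab_item, value):
    """Ordered FPG thresholds: < 3.0 mmol/L is severe hypoglycemia,
    otherwise < 3.9 mmol/L is hypoglycemia. The severe branch must be
    tested first, mirroring case_when() evaluation order."""
    if lab_item == "FPG" and value < 3.0:
        return "severe hypoglycemia"
    if lab_item == "FPG" and value < 3.9:
        return "hypoglycemia"
    return None  # non-FPG items or normal results are left unclassified

print(classify_hypoglycemia("FPG", 2.8))  # severe hypoglycemia
print(classify_hypoglycemia("FPG", 3.5))  # hypoglycemia
```

Swapping the two tests would silently misfile every severe reading as ordinary hypoglycemia, since 2.8 also satisfies `< 3.9`.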
## Quarterly change by subsequent regimen - HbA1c, FPG, 2-hour post-load glucose ####
fu_lab_reorder_summary <- lab_quarter_fu_reorder %>%
select(patient_id, index_ym, lab_item_name, lab_result, reorder_rx_cat, reorder_fu_quarter) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
reorder_fu_quarter = as.factor(reorder_fu_quarter),
lab_item_name = as.factor(lab_item_name)) %>%
right_join(rx_bl) %>%
drop_na(bl_rx_cat) %>%
select(-patient_id)
write_csv(fu_lab_reorder_summary, "../../../data/preprocessed/fu_lab_reorder_summary.csv")
## Quarterly change by subsequent regimen - hypoglycemia ####
fu_hypoglycemia_reorder_summary <- lab_quarter_fu_reorder %>%
bind_rows(hypoglycemia_fu_reorder) %>%
mutate(hypoglycemia = case_when(lab_item_name == "FPG" & lab_result < 3.0 ~ "severe hypoglycemia",
lab_item_name == "FPG" & lab_result < 3.9 ~ "hypoglycemia",
TRUE ~ hypoglycemia)) %>%
right_join(rx_bl) %>%
drop_na(bl_rx_cat) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
bl_rx_cat = as.factor(bl_rx_cat),
reorder_rx_cat = as.factor(reorder_rx_cat),
reorder_fu_quarter = as.factor(reorder_fu_quarter),
hypoglycemia = as.factor(hypoglycemia)) %>%
drop_na(hypoglycemia) %>%
select(patient_id, index_ym, subgroup, reorder_rx_cat, reorder_fu_quarter, hypoglycemia, bl_rx_cat) %>%
distinct() %>%
group_by(index_ym, subgroup, reorder_rx_cat, reorder_fu_quarter, hypoglycemia, bl_rx_cat, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(fu_hypoglycemia_reorder_summary, "../../../data/preprocessed/fu_hypoglycemia_reorder_summary.csv")
## Quarterly change without drug treatment - HbA1c, FPG, 2-hour post-load glucose ####
fu_lab_norx_summary <- lab_bl %>%
select(patient_id, index_ym, subgroup, lab_item_name, lab_result) %>%
mutate(fu_quarter = 0) %>%
bind_rows(select(lab_quarter_fu, patient_id, index_ym, subgroup, lab_item_name, lab_result, fu_quarter)) %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
fu_quarter = as.factor(fu_quarter),
lab_item_name = as.factor(lab_item_name)) %>%
right_join(rx_bl %>%
filter(is.na(bl_rx_cat)) %>%
select(patient_id)) %>%
select(-patient_id)
write_csv(fu_lab_norx_summary, "../../../data/preprocessed/fu_lab_norx_summary.csv")
## Blood glucose tests ####
# Number of visits
fu_bg_visit_summary <- lab_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, visit_id, lab_item_name) %>%
filter(grepl("FPG|P2hPG|HbA1c", lab_item_name)) %>%
# select(-lab_item_name) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_item_name = as.factor(lab_item_name),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, lab_item_name, fu_quarter, .drop = FALSE) %>%
summarise(bg_visit_num = n()) %>%
ungroup()
write_csv(fu_bg_visit_summary, "../../../data/preprocessed/fu_bg_visit_summary.csv")
# Number of patients
fu_bg_patient_summary <- lab_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, lab_item_name) %>%
filter(grepl("FPG|P2hPG|HbA1c", lab_item_name)) %>%
# select(-lab_item_name) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
lab_item_name = as.factor(lab_item_name),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, lab_item_name, fu_quarter, .drop = FALSE) %>%
summarise(bg_patient_num = n()) %>%
ungroup()
write_csv(fu_bg_patient_summary, "../../../data/preprocessed/fu_bg_patient_summary.csv")
## Blood glucose criteria ####
fu_unnormal_visit_summary <- lab_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, visit_id, lab_unnormal) %>%
drop_na(lab_unnormal) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(patient_id, index_ym, subgroup, fu_quarter, .drop = FALSE) %>%
summarise(lab_unnormal_num = sum(lab_unnormal)) %>%
filter(lab_unnormal_num >= 2) %>%
ungroup() %>%
group_by(index_ym, subgroup, fu_quarter, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
fu_FPG_normal_visit_summary <- lab_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, lab_FPG_normal) %>%
drop_na(lab_FPG_normal) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, fu_quarter, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
fu_HbA1c_normal_visit_summary <- lab_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, lab_HbA1c_normal) %>%
drop_na(lab_HbA1c_normal) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, fu_quarter, .drop = FALSE) %>%
summarise(patient_num = n()) %>%
ungroup()
write_csv(fu_unnormal_visit_summary, "../../../data/preprocessed/fu_unnormal_visit_summary.csv")
write_csv(fu_FPG_normal_visit_summary, "../../../data/preprocessed/fu_FPG_normal_visit_summary.csv")
write_csv(fu_HbA1c_normal_visit_summary, "../../../data/preprocessed/fu_HbA1c_normal_visit_summary.csv")
## C-peptide ####
# Number of visits
fu_cp_visit_summary <- cp_wide_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, visit_id, cp_abnormal) %>%
drop_na(cp_abnormal) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, fu_quarter, subgroup, .drop = FALSE) %>%
summarise(cp_abnormal_num = sum(cp_abnormal))
write_csv(fu_cp_visit_summary, "../../../data/preprocessed/fu_cp_visit_summary.csv")
# Number of patients
fu_cp_patient_summary <- cp_wide_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, cp_abnormal) %>%
drop_na(cp_abnormal) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, fu_quarter, subgroup, .drop = FALSE) %>%
summarise(cp_abnormal_num = sum(cp_abnormal)) %>%
ungroup()
fu_cp_patient_if_summary <- cp_wide_fu %>%
filter(!(is.na(`FCP`) & is.na(`1hCP`) & is.na(`2hCP`) & is.na(`3hCP`))) %>%
mutate(cp_if = 1) %>%
select(patient_id, index_ym, subgroup, fu_quarter, cp_if) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, fu_quarter, subgroup, .drop = FALSE) %>%
summarise(cp_if_num = sum(cp_if)) %>%
ungroup()
write_csv(fu_cp_patient_summary, "../../../data/preprocessed/fu_cp_patient_summary.csv")
write_csv(fu_cp_patient_if_summary, "../../../data/preprocessed/fu_cp_patient_if_summary.csv")
## Blood pressure ####
# Number of visits
fu_vital_visit_summary <- vital_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, visit_id, hypertension) %>%
drop_na(hypertension) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, fu_quarter, hypertension, .drop = FALSE) %>%
summarise(vital_num = n()) %>%
ungroup()
write_csv(fu_vital_visit_summary, "../../../data/preprocessed/fu_vital_visit_summary.csv")
# Number of patients
fu_vital_patient_summary <- vital_fu %>%
select(patient_id, index_ym, subgroup, fu_quarter, hypertension) %>%
drop_na(hypertension) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, fu_quarter, hypertension, .drop = FALSE) %>%
summarise(vital_num = n()) %>%
ungroup()
# Number of patients tested
fu_vital_if_patient_summary <- vital_fu %>%
drop_na(舒张压, 收缩压) %>%
mutate(vital_if = 1) %>%
select(patient_id, index_ym, subgroup, fu_quarter, vital_if) %>%
distinct() %>%
mutate(index_ym = as.factor(index_ym),
index_ym = ordered(index_ym, levels = date_seq),
subgroup = as.factor(subgroup),
fu_quarter = as.factor(fu_quarter)) %>%
group_by(index_ym, subgroup, fu_quarter, .drop = FALSE) %>%
summarise(vital_if_num = sum(vital_if)) %>%
ungroup()
write_csv(fu_vital_patient_summary, "../../../data/preprocessed/fu_vital_patient_summary.csv")
write_csv(fu_vital_if_patient_summary, "../../../data/preprocessed/fu_vital_if_patient_summary.csv")
## Short-term intensive insulin therapy ####
# For T2DM patients started on injectable therapy at first diagnosis, the interval to the
# next prescription determines whether they received short-term intensive insulin therapy.
# If the patient has no further injectable prescription within the following month, they are
# classified as having received intensive insulin therapy.
# Short-term intensive insulin therapy is indicated for newly diagnosed T2DM patients with
# HbA1c >= 9.0% or FPG >= 11.1 mmol/L, or with marked hyperglycemic symptoms; for patients
# whose glucose remains clearly elevated (e.g. HbA1c >= 9.0%) after more than 3 months of
# combined oral antidiabetic therapy; and for drug-naive T2DM patients whose glucose remains
# off target despite insulin therapy with adequate dose titration.
Naive_inj_group <- target_group %>%
left_join(rx_bl) %>%
filter(subgroup == "Drug Naive" & bl_rx_cat == "inj" & bl_rx_desc != "GLP-1")
inj_1m <- Naive_inj_group %>%
left_join(rx_clean) %>%
filter(rx_cat == "inj" & std_rx_desc != "GLP-1") %>%
filter(rx_index_date + bl_coverage_time < order_datetime & rx_index_date + bl_coverage_time + 30 >= order_datetime) %>%
select(patient_id) %>%
distinct()
rx_iit <- Naive_inj_group %>%
anti_join(inj_1m) %>%
left_join(rx_clean %>%
filter(rx_cat == "inj" & std_rx_desc != "GLP-1")) %>%
filter(rx_index_date + bl_coverage_time + 30 < order_datetime) %>%
group_by(patient_id) %>%
arrange(order_datetime) %>%
slice_head(n = 1) %>%
ungroup() %>%
mutate(iit_period = as.numeric(order_datetime - rx_index_date),
iit_same = if_else(std_rx_desc == bl_rx_desc, 1, 0)) %>%
select(subgroup, index_ym, bl_rx_desc, bl_rx_cat,
rx_cat, std_rx_desc, iit_period, iit_same)
write_csv(rx_iit, "../../../data/preprocessed/rx_iit.csv")
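Under the rule above, a drug-naive patient counts toward intensive insulin therapy when no further injectable order falls in the 30-day window after baseline coverage ends (the `anti_join(inj_1m)` step). A hedged Python sketch of that window test (function and argument names are illustrative, not part of the pipeline):

```python
from datetime import date, timedelta

def is_short_term_iit(rx_index_date, bl_coverage_days, inj_order_dates):
    """True when no injectable order falls in the 30 days after baseline
    coverage ends, i.e. in the window (index + coverage, index + coverage + 30].
    Mirrors the inj_1m filter followed by anti_join in the R code."""
    start = rx_index_date + timedelta(days=bl_coverage_days)
    end = start + timedelta(days=30)
    return not any(start < d <= end for d in inj_order_dates)

# Refill 5 days after coverage ends -> continued therapy, not IIT
print(is_short_term_iit(date(2022, 1, 1), 14, [date(2022, 1, 20)]))  # False
# Next order well past the 30-day window -> classified as IIT
print(is_short_term_iit(date(2022, 1, 1), 14, [date(2022, 4, 1)]))   # True
```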
save.image(here("data", "clean", "dataset_summary.RData"))
# Global options
options(scipen = 1, digits = 2)
options(encoding = 'UTF-8')
library(here)
source(here("codes","preprocess","R","common.R"))
source(here("codes","preprocess","R","wrangling.R"))
patient_clean <- arrow::read_parquet("../../../data/preprocessed/patient_clean.parquet")
visit_clean <- arrow::read_parquet("../../../data/preprocessed/visit_clean.parquet")
diag_clean <- arrow::read_parquet("../../../data/preprocessed/diag_clean.parquet")
rx_clean <- arrow::read_parquet("../../../data/preprocessed/rx_clean.parquet")
lab_clean <- arrow::read_parquet("../../../data/preprocessed/lab_clean.parquet")
vital_clean <- arrow::read_parquet("../../../data/preprocessed/vital_clean.parquet")
# Data time range: 2021-01-01 ~ 2024-12-31
# Index date : first "type 2 diabetes" diagnosis: 2022-01-01 ~ 2024-12-31
# Baseline : index date - 3 * 30.5
# Followup : quarterly
# Subgroup :
#   (Drug naive) FPG > 7.0 mmol/L and/or HbA1c > 7%, excluding prior diabetes diagnosis and antidiabetic use
#   (Revisit)    FPG > 7.0 mmol/L and/or HbA1c > 7%, with prior diabetes diagnosis and antidiabetic use
# uncontrolled (lab) :
#   Revisit oral uncontrolled: HbA1c >= 7% within 2 months of treatment (last result if several)
#   Revisit injectable uncontrolled: HbA1c >= 7% within 2 months of treatment
# Regimen switch: the regimen at the first visit after loss of control
# Discontinuation: a prescription gap of more than 90 days
# Restart: resuming medication after discontinuation
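The 90-day discontinuation/restart rule can be sketched as a gap scan over a patient's ordered prescription dates (a standalone Python illustration; `first_restart_index` is a hypothetical helper, not part of this pipeline):

```python
from datetime import date

def first_restart_index(order_dates, gap_days=90):
    """Return the index of the first prescription that follows a gap of more
    than gap_days days (a restart after discontinuation), or None when the
    patient never discontinued under this definition."""
    dates = sorted(order_dates)
    for i in range(1, len(dates)):
        if (dates[i] - dates[i - 1]).days > gap_days:
            return i  # dates[i] is the restart prescription
    return None

orders = [date(2022, 1, 5), date(2022, 2, 1), date(2022, 6, 20)]
print(first_restart_index(orders))  # 2: the Jun 20 order follows a 139-day gap
```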
# Index date
diag_index <- diag_clean %>%
# filter(raw_dx_desc == "2型糖尿病") %>%
filter(T2DM == 1) %>%
filter(diagnosis_datetime >= "2018-01-01" & diagnosis_datetime <= "2022-12-31") %>%
# filter(diagnosis_datetime >= "2022-01-01" & diagnosis_datetime <= "2024-12-31") %>%
group_by(patient_id) %>%
arrange(diagnosis_datetime) %>%
slice_head(n = 1) %>%
ungroup() %>%
select(patient_id, index_visit_id = visit_id, index_date = diagnosis_datetime)
# left_join(diag_subgroup) %>%
# mutate(index_ym = format(as.Date(diagnosis_datetime), "%Y-%m")) %>%
# Missing age and sex variables
demo_anti <- diag_index %>%
left_join(patient_clean) %>%
drop_na(sex, birth_date) %>%
filter(grepl("男|女", sex)) %>%
select(patient_id) %>%
distinct()
# Gestational diabetes and type 1 diabetes
diag_anti <- diag_index %>%
left_join(diag_clean) %>%
filter(std_dx_desc == "1型糖尿病"|std_dx_desc == "妊娠糖尿病") %>%
select(patient_id) %>%
distinct()
# Prior diabetes diagnosis
t2dm_past <- diag_index %>%
left_join(diag_clean) %>%
# filter(raw_dx_desc == "2型糖尿病" & diagnosis_datetime < index_date) %>%
filter(T2DM == 1 & diagnosis_datetime < index_date) %>%
mutate(past_t2dm = 1) %>%
select(patient_id, past_t2dm) %>%
distinct()
# Prior antidiabetic medication
rx_past <- diag_index %>%
left_join(rx_clean) %>%
drop_na(rx_desc) %>%
filter(order_datetime < index_date) %>%
mutate(past_rx = 1) %>%
select(patient_id, past_rx) %>%
distinct()
# FPG > 7.0 mmol/L and/or HbA1c > 7% during the identification period
# a1c_fpg_bl <- diag_index %>%
# left_join(lab_clean) %>%
# # filter(result_datetime >= index_date - 90 & result_datetime <= index_date) %>%
# # group_by(patient_id, lab_item_name) %>%
# # arrange(result_datetime) %>%
# # slice_tail(n = 1) %>%
# # ungroup()
# mutate(lab_if = case_when(lab_item_name == "HbA1c" & lab_result > 7.0 ~ 1,
# lab_item_name == "FPG" & lab_result > 7.0 ~ 1)) %>%
# filter(lab_if == 1) %>%
# select(patient_id) %>%
# distinct()
# Drug naive & revisit
# diag_subgroup <- diag_index %>%
# left_join(diag_clean) %>%
# filter(raw_dx_desc == "2型糖尿病") %>%
# group_by(patient_id) %>%
# arrange(diagnosis_datetime) %>%
# slice_head(n = 1) %>%
# ungroup() %>%
# mutate(subgroup = case_when(diagnosis_datetime >= "2018-01-01" & diagnosis_datetime <= "2020-12-31" ~ "Drug Naive", #identication period
# diagnosis_datetime < "2018-01-01" ~ "Revisit")) %>% #identication period
# mutate(subgroup = case_when(diagnosis_datetime >= "2022-01-01" & diagnosis_datetime <= "2024-12-31" ~ "Drug Naive", # identication period
# diagnosis_datetime < "2022-01-01" ~ "Revisit")) %>% # identication period
# select(patient_id, subgroup) %>%
# drop_na()
diag_subgroup <- diag_index %>%
# inner_join(a1c_fpg_bl) %>%
left_join(t2dm_past) %>%
left_join(rx_past) %>%
mutate(subgroup = case_when(is.na(past_t2dm) & is.na(past_rx) ~ "Drug Naive", # identification period
past_t2dm == 1|past_rx == 1 ~ "Revisit")) %>% # identification period
select(patient_id, subgroup) %>%
drop_na()
# Inclusion and exclusion
target_group <- diag_index %>%
inner_join(demo_anti) %>%
anti_join(diag_anti) %>%
inner_join(diag_subgroup) %>%
select(patient_id, index_visit_id, index_date, subgroup) %>%
mutate(index_ym = format(as.Date(index_date), "%Y-%m"))
write_csv(target_group, "../../../data/preprocessed/target_group.csv")
# Global options
library(showtext)
options(scipen = 1, digits = 2)
options(encoding = 'UTF-8')
showtext.auto()
# Sys.setlocale("LC_ALL", "zh_CN.UTF-8")
library(tidyverse)
library(here)
library(rio)
library(readxl)
library(janitor)
library(gtsummary)
library(survival)
library(officer)
library(officedown)
library(flextable)
library(zoo)
# library(eoffice)
library(tableone)
library(plotly)
source(here("codes","preprocess","R","common.R"))
# Import data
# Patient table
patient_raw <- read_csv("../../../data/cleandata/patient.csv")
patient_clean <- patient_raw %>%
distinct() %>%
select(patient_id, sex, birth_date) %>%
mutate(across(.cols = c(patient_id, sex), .fns = as.character),
across(contains("date"), ~ as.Date(.x)),
std_sex = recode(sex,
'女' = 'Female',
'男' = 'Male')) %>%
mutate(patient_id = sub("DT_", "", patient_id))
arrow::write_parquet(patient_clean, "../../../data/preprocessed/patient_clean.parquet")
rm(patient_raw)
gc()
# Visit table
visit_raw <- read_csv("../../../data/cleandata/visit.csv")
visit_clean <- visit_raw %>%
distinct() %>%
mutate(across(.cols = c(patient_id, visit_id), .fns = as.character),
across(contains("date"), ~ as.Date(.x)))
arrow::write_parquet(visit_clean, "../../../data/preprocessed/visit_clean.parquet")
rm(visit_raw)
gc()
# Diagnosis table
diag_raw <- read_csv("../../../data/cleandata/diagnosis.csv")
diag_clean <- diag_raw %>%
mutate(T2DM = case_when(grepl("糖尿病|消渴(症|病)", dx_desc) ~ 1,
grepl("E1(1|2|3|4)", dx) ~ 1,
TRUE ~ NA),
hypoglycemia = case_when(grepl("低血糖", dx_desc) ~ 1,
grepl("E16.2", dx) ~ 1,
TRUE ~ NA)) %>%
filter(!(is.na(T2DM) & is.na(hypoglycemia) & is.na(std_dx_desc))) %>%
separate_rows(std_dx_desc, sep = "\\,|\\;|\\;|\\,|\\、|\\n") %>%
distinct() %>%
mutate(across(.cols = c(patient_id, visit_id, dx_desc), .fns = as.character),
across(contains("date"), ~ as.Date(.x))) %>%
mutate(
std_dx_desc = recode(std_dx_desc,
'卒中' = 'Stroke',
'背景性视网膜病变' = 'BDR',
'增殖性视网膜病变' = 'PDR',
'黄斑水肿' = 'Macular Edema',
'重度视觉丧失' = 'Severe Visual Loss',
'大量白蛋白尿' = 'Macroalbuminuria',
'微量白蛋白尿' = 'Microalbuminuria',
'症状性神经病变' = 'Symptomatic Neuropathy',
'外周血管病' = 'PVD',
'终末期肾病' = 'ESRD',
'缺血性心脏病' = 'IHD',
'心力衰竭' = 'Heart Failure',
'心肌梗死' = 'MI',
'血脂异常' = 'Dyslipidemia',
'高血压' = 'Hypertension',
'下肢截肢' = 'LLA'),
dx_cat = case_when(
grepl("BDR|PDR|Macular Edema|Severe Visual Loss", std_dx_desc) ~ "Ocular Complications",
grepl("Symptomatic Neuropathy|PVD|LLA", std_dx_desc) ~ "Lower Limb Complications",
grepl("Microalbuminuria|Macroalbuminuria|ESRD", std_dx_desc) ~ "Nephropathy Complications",
grepl("IHD|MI|Stroke|Heart Failure", std_dx_desc) ~ "Macrovascular Complications",
TRUE ~ NA))
arrow::write_parquet(diag_clean, "../../../data/preprocessed/diag_clean.parquet")
rm(diag_raw)
gc()
# Prescription table
rx_raw <- read_csv("../../../data/cleandata/prescribing.csv")
rx_sep <- rx_raw %>%
mutate(visit_id = as.character(visit_id)) %>%
drop_na(std_rx_desc) %>%
distinct() %>%
left_join(select(visit_clean, visit_id, patient_id, patient_type, admission_datetime, specialty = raw_specialty, provider_id)) %>%
# mutate(order_datetime = case_when(is.na(order_datetime) ~ admission_datetime,
# TRUE ~ order_datetime)) %>%
mutate(order_datetime = case_when(!is.na(rx_start_datetime) ~ rx_start_datetime,
is.na(rx_start_datetime) ~ order_datetime,
TRUE ~ admission_datetime)) %>%
separate(drug_spec, c("drug_spec_unit", "drug_spec_qty"), sep = "\\*|x|X|×|\\s|\\n|\\:|\\:|\\/", extra = "merge", remove = FALSE) %>%
mutate(across(.cols = c(patient_id, visit_id), .fns = as.character),
across(contains("date"), ~ as.Date(.x)))
rx_clean <- rx_sep %>%
mutate(
std_rx_desc = recode(std_rx_desc,
'基础胰岛素' = 'Basal',
'餐时胰岛素' = 'Bolus',
'预混胰岛素' = 'Premix',
'Dual(双联)' = 'Dual',
'α-糖苷酶抑制剂类' = 'α-glucosidase',
'DPP4i' = 'DDP-4',
'双胍类' = 'Biguanide',
'噻唑烷二酮类' = 'Thiazolidinedione',
'SGLT2i' = 'SGLT-2',
'格列奈类' = 'Glinide',
'磺脲类' = 'Sulfonylurea',
'复方制剂' = 'Compound'),
rx_cat = case_when(
grepl("α-glucosidase|DDP-4|Biguanide|Thiazolidinedione|SGLT-2|Glinide|Sulfonylurea|Compound", std_rx_desc) ~ "oad",
grepl("Basal|Bolus|Premix|GLP-1|Dual", std_rx_desc) ~ "inj",
TRUE ~ NA),
freq_name = case_when(grepl("hs|HS|qd|QD|am|pm|AM|PM|qn|QN|prn|PRN|st|ST|sos|一天一次|餐|once|ONCE|qm|QM|晚|1次\\/日|每天早上一次|每天晨起", std_frequency) ~ 1,
grepl("bid|BID|q12h|Q12H|一天两次|2次\\/日|早上中午各一次", std_frequency) ~ 2,
grepl("TID|tid|q8h|Q8H", std_frequency) ~ 3,
grepl("qid|q6h|QID|Q6H", std_frequency) ~ 4,
grepl("一天五次|5id", std_frequency) ~ 5,
grepl("q4h|6id|Q4H|6ID", std_frequency) ~ 6,
grepl("q2h|Q2H|Q1H|q1h", std_frequency) ~ 12,
grepl("qh|QH", std_frequency) ~ 24,
grepl("q72h|Q72H", std_frequency) ~ 0.33,
grepl("q7h|Q7H", std_frequency) ~ 3.43,
grepl("qod|QOD", std_frequency) ~ 0.5,
grepl("q1/2h|Q1/2H", std_frequency) ~ 48,
grepl("q3d|Q3D", std_frequency) ~ 0.33,
grepl("q5d|Q5D", std_frequency) ~ 0.2,
grepl("q6d|Q6D", std_frequency) ~ 0.17,
grepl("q2d|Q2D", std_frequency) ~ 0.5,
grepl("q4d|Q4D|四天一次", std_frequency) ~ 0.25,
grepl("tiw|TIW|一三五|二四六", std_frequency) ~ 0.43,
grepl("q2w|qow|Q2W|QOW", std_frequency) ~ 0.07,
grepl("QW|qw|每周|1次/7日", std_frequency) ~ 0.14,
grepl("biw|BIW", std_frequency) ~ 0.29,
grepl("qiw|QIW", std_frequency) ~ 0.57),
freq_name = as.numeric(freq_name), # doses per day
dosage_qty = as.numeric(dosage_qty), # dose per administration
quantity = as.numeric(quantity), # total quantity dispensed
qty_per_unit = case_when( # dose units per dispensing unit (unit conversion)
grepl("mg", quantity_uom) & grepl("mg", dosage_unit) ~ 1,
grepl("μ", quantity_uom) & grepl("μ", dosage_unit) ~ 1,
grepl("u|IU", dosage_unit) ~ 300,
grepl("片", dosage_unit) ~ 30,
grepl("支", dosage_unit) ~ as.numeric(readr::parse_number(drug_spec_qty)),
grepl("瓶|μ", dosage_unit) ~ as.numeric(readr::parse_number(drug_spec_unit)),
TRUE ~ as.numeric(readr::parse_number(drug_spec_unit))*as.numeric(readr::parse_number(drug_spec_qty)))
) %>%
mutate( # Compute days of supply per prescription
coverage_time = case_when(
order_datetime <= rx_end_datetime &
is.na(order_datetime) == FALSE &
is.na(rx_end_datetime) == FALSE ~ as.numeric(rx_end_datetime - order_datetime + 1),
TRUE ~ (qty_per_unit*quantity)/(dosage_qty*freq_name)),
coverage_time = case_when(
is.na(coverage_time) ~ 1,
TRUE ~ as.numeric(coverage_time)),
coverage_time = ceiling(coverage_time)
)
arrow::write_parquet(rx_clean, "../../../data/preprocessed/rx_clean.parquet")
rm(rx_raw)
gc()
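The fallback branch of `coverage_time` divides the total dispensed amount by daily consumption, rounds up, and defaults to 1 day when the result is undeterminable. A small Python sketch of that arithmetic (illustrative only; the explicit start/end-date branch of the R code is omitted, and the function name is hypothetical):

```python
import math

def days_of_supply(qty_per_unit, quantity, dose_qty, freq_per_day):
    """Days covered by one prescription: total dispensed units divided by
    daily consumption, rounded up; defaults to 1 day when undeterminable,
    mirroring the coverage_time fallback in the R pipeline."""
    try:
        days = (qty_per_unit * quantity) / (dose_qty * freq_per_day)
    except (TypeError, ZeroDivisionError):
        return 1  # missing or zero inputs -> minimal 1-day coverage
    if math.isnan(days) or math.isinf(days):
        return 1
    return math.ceil(days)

print(days_of_supply(30, 2, 1, 2))  # 30 tablets/box * 2 boxes / 2 per day = 30
```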
# Lab results table
# Check units
lab_raw <- read_csv("../../../data/cleandata/lab_result_cm.csv")
lab_clean <- lab_raw %>%
drop_na(std_lab_item_name) %>%
distinct() %>%
mutate(across(.cols = c(patient_id, visit_id), .fns = as.character),
across(contains("date"), ~ as.Date(.x)),
lab_result = as.numeric(result_num),
std_lab_item_name = recode(std_lab_item_name,
'FPG' = 'FPG',
'HbA1c' = 'HbA1c',
'葡萄糖负荷2小时血糖' = 'P2hPG',
'餐后1小时C肽' = '1hCP',
'餐后2小时C肽' = '2hCP',
'餐后3小时C肽' = '3hCP',
'空腹C肽' = 'FCP')) %>%
select(patient_id, visit_id, patient_type, lab_item_name = std_lab_item_name,
specimen_source, lab_result, result_datetime)
arrow::write_parquet(lab_clean, "../../../data/preprocessed/lab_clean.parquet")
rm(lab_raw)
gc()
# Vitals table
# Check units
vital_raw <- read_csv("../../../data/cleandata/vital.csv")
vital_clean <- vital_raw %>%
mutate(across(.cols = c(patient_id, visit_id), .fns = as.character),
across(contains("date"), ~ as.Date(.x))) %>%
distinct() %>%
filter(grepl("收缩压|舒张压", item_name))
arrow::write_parquet(vital_clean, "../../../data/preprocessed/vital_clean.parquet")
rm(vital_raw)
gc()
{
"cultural_date_format": "%d-%m-%Y",
"languages": ["English", "中文"],
"translation": [
{
"English": "Sample data",
"中文": "样本数据"
},
{
"English": "Choose the start date",
"中文": "选择初始日期"
},
{
"English": "Choose the end date",
"中文": "选择结束日期"
},
{
"English": "Characteristics of the Patient",
"中文": "人群特征"
},
{
"English": "Dataset Time Span",
"中文": "数据年份跨度"
},
{
"English": "Total Number of Patients",
"中文": "患者总数"
},
{
"English": "Filter",
"中文": "筛选年份"
},
{
"English": "Number of Patients After Filtering",
"中文": "筛选后患者数"
},
{
"English": "Gender",
"中文": "性别"
},
{
"English": "Male",
"中文": "男"
},
{
"English": "Female",
"中文": "女"
},
{
"English": "Patient Age Distribution",
"中文": "患者年龄分布"
},
{
"English": "Patient Age distribution",
"中文": "患者年龄分布"
},
{
"English": "Age Group (Years)",
"中文": "年龄分组(岁)"
},
{
"English": "Number of patients",
"中文": "患者人数"
},
{
"English": "Number of Patients",
"中文": "患者人数"
},
{
"English": "Number of Patients:",
"中文": "患者人数:"
},
{
"English": "Condition of Disease",
"中文": "疾病情况"
},
{
"English": "Comorbidity",
"中文": "共病情况"
},
{
"English": "Patients Comorbidity Distribution",
"中文": "患者共病情况分布"
},
{
"English": "Stroke",
"中文": "卒中"
},
{
"English": "BDR",
"中文": "背景性视网膜病变"
},
{
"English": "PDR",
"中文": "增殖性视网膜病变"
},
{
"English": "Macular Edema",
"中文": "黄斑水肿"
},
{
"English": "Severe Visual Loss",
"中文": "重度视觉丧失"
},
{
"English": "Macroalbuminuria",
"中文": "大量白蛋白尿"
},
{
"English": "Microalbuminuria",
"中文": "微量白蛋白尿"
},
{
"English": "Symptomatic Neuropathy",
"中文": "症状性神经病变"
},
{
"English": "PVD",
"中文": "外周血管病"
},
{
"English": "ESRD",
"中文": "终末期肾病"
},
{
"English": "IHD",
"中文": "缺血性心脏病"
},
{
"English": "Heart Failure",
"中文": "心力衰竭"
},
{
"English": "MI",
"中文": "心肌梗死"
},
{
"English": "Dyslipidemia",
"中文": "血脂异常"
},
{
"English": "Hypertension",
"中文": "高血压"
},
{
"English": "Complication",
"中文": "并发症情况"
},
{
"English": "Baseline",
"中文": "基线"
},
{
"English": "Frequency of Complication over Baseline Period",
"中文": "基线期间并发症的发生频率"
},
{
"English": "0-12 Months",
"中文": "0-12个月"
},
{
"English": "Frequency of Complication over 0 to 12 Months Followed-up Period",
"中文": "0 至 12 个月随访期间并发症的发生频率"
},
{
"English": "0-24 Months",
"中文": "0-24个月"
},
{
"English": "Frequency of Complication over 0 to 24 Months Followed-up Period",
"中文": "0 至 24 个月随访期间并发症的发生频率"
},
{
"English": "0-36 Months",
"中文": "0-36个月"
},
{
"English": "Frequency of Complication over 0 to 36 Months Followed-up Period",
"中文": "0 至 36 个月随访期间并发症的发生频率"
},
{
"English": "Lower Limb Complications",
"中文": "下肢并发症"
},
{
"English": "Macrovascular Complications",
"中文": "大血管并发症"
},
{
"English": "Nephropathy Complications",
"中文": "肾病并发症"
},
{
"English": "Ocular Complications",
"中文": "眼部并发症"
},
{
"English": "Glycemic Control",
"中文": "血糖控制"
},
{
"English": "Line Plot",
"中文": "折线图"
},
{
"English": "FPG",
"中文": "空腹血糖"
},
{
"English": "FPG (mmol/L)",
"中文": "空腹血糖(mmol/L)"
},
{
"English": "Mean FPG of Patients over Follow-Up Period",
"中文": "随访期间患者的平均空腹血糖"
},
{
"English": "P2hPG",
"中文": "餐后两小时血糖"
},
{
"English": "P2hPG (mmol/L)",
"中文": "餐后两小时血糖(mmol/L)"
},
{
"English": "Mean P2hPG of Patients over Follow-Up Period",
"中文": "随访期间患者的平均餐后两小时血糖"
},
{
"English": "HbA1c",
"中文": "糖化血红蛋白"
},
{
"English": "HbA1c (%)",
"中文": "糖化血红蛋白(%)"
},
{
"English": "Mean HbA1c of Patients over Follow-Up Period",
"中文": "随访期间患者的平均糖化血红蛋白"
},
{
"English": "Follow-Up Time (Months)",
"中文": "随访时间(月)"
},
{
"English": "Follow-Up Month:",
"中文": "随访月份:"
},
{
"English": "Mean ± SD:",
"中文": "平均值±标准差:"
},
{
"English": "Box Plot",
"中文": "箱线图"
},
{
"English": "Distribution of FPG of Patients over Follow-Up Period",
"中文": "随访期间患者空腹血糖分布"
},
{
"English": "Distribution of P2hPG of Patients over Follow-Up Period",
"中文": "随访期间患者餐后两小时血糖分布"
},
{
"English": "Distribution of HbA1c of Patients over Follow-Up Period",
"中文": "随访期间患者糖化血红蛋白分布"
},
{
"English": "Patient Flow",
"中文": "人群流向"
},
{
"English": "Treatment Path",
"中文": "治疗路径"
},
{
"English": "All Patients",
"中文": "所有患者"
},
{
"English": "Drug Naive",
"中文": "初诊"
},
{
"English": "Revisit",
"中文": "非初诊"
},
{
"English": "Drug Naive-OAD",
"中文": "初诊-口服药"
},
{
"English": "Drug Naive-Injectable",
"中文": "初诊-注射药"
},
{
"English": "Revisit-OAD",
"中文": "非初诊-口服药"
},
{
"English": "Revisit-Injectable",
"中文": "非初诊-注射药"
},
{
"English": "Basal",
"中文": "基础"
},
{
"English": "Bonus",
"中文": "餐时"
},
{
"English": "Premix",
"中文": "预混"
},
{
"English": "Others",
"中文": "其他"
},
{
"English": "GLP-1",
"中文": "胰高血糖素样肽-1"
},
{
"English": "OAD Uncontrolled",
"中文": "口服未受控"
},
{
"English": "OAD uncontrolled",
"中文": "口服未受控"
},
{
"English": "Uncontrolled",
"中文": "未受控"
},
{
"English": "Initiate Injection",
"中文": "初始注射"
},
{
"English": "Injectable uncontrolled",
"中文": "注射未受控"
},
{
"English": "Injectable Uncontrolled",
"中文": "注射未受控"
},
{
"English": "Basal Uncontrolled",
"中文": "基础未受控"
},
{
"English": "Premix Uncontrolled",
"中文": "预混未受控"
},
{
"English": "GLP-1 Uncontrolled",
"中文": "胰高血糖素样肽-1未受控"
},
{
"English": "Bonus Uncontrolled",
"中文": "餐时未受控"
},
{
"English": "Basal Uncontrolled Switch",
"中文": "基础未受控切换方案"
},
{
"English": "Premix Uncontrolled Switch",
"中文": "预混未受控切换方案"
},
{
"English": "GLP-1 Uncontrolled Switch",
"中文": "胰高血糖素样肽-1未受控切换方案"
},
{
"English": "Bonus Uncontrolled Switch",
"中文": "餐时未受控切换方案"
},
{
"English": "Uncontrolled Not Switch",
"中文": "未受控不切换方案"
},
{
"English": "OAD",
"中文": "口服"
},
{
"English": "Drug Naive Patient",
"中文": "初诊患者"
},
{
"English": "DM Duration",
"中文": "注射治疗的起始时间"
},
{
"English": "Time to First Refill",
"中文": "首次重新注射时间"
},
{
"English": "Percentage of Patients",
"中文": "患者比例"
},
{
"English": "Insulin Detemir",
"中文": "地特胰岛素"
},
{
"English": "Insulin Glargine",
"中文": "甘精胰岛素"
},
{
"English": "Insulin Aspart 30",
"中文": "门冬胰岛素30"
},
{
"English": "Insulin Aspart",
"中文": "门冬胰岛素"
},
{
"English": "Isophane Protamine Biosynthetic Human Insulin",
"中文": "精蛋白生物合成人胰岛素"
},
{
"English": "Protamine Zinc Insulin ",
"中文": "精蛋白锌胰岛素"
},
{
"English": "Premixed Protamine Recombinant Human Insulin",
"中文": "精蛋白重组人胰岛素混合"
},
{
"English": "Liraglutide ",
"中文": "利拉鲁肽"
},
{
"English": "Biosynthetic Human Insulin ",
"中文": "生物合成人胰岛素"
},
{
"English": "Recombinant Human Insulin ",
"中文": "重组人胰岛素"
},
{
"English": "Recombinant Glargine Insulin ",
"中文": "重组甘精胰岛素"
},
{
"English": "Isophane Protamine Biosynthetic Human Insulin (Novo Nordisk)",
"中文": "精蛋白生物合成人胰岛素(诺和诺德)"
},
{
"English": "Insulin Aspart 30 (Novo Nordisk)",
"中文": "门冬胰岛素30(诺和诺德)"
},
{
"English": "Premixed Protamine Recombinant Human Insulin (TUL)",
"中文": "精蛋白重组人胰岛素混合"
},
{
"English": "Protamine Zinc Insulin (Wanbang)",
"中文": "精蛋白锌胰岛素(江苏万邦)"
},
{
"English": "Insulin Aspart 30 (Unknown)",
"中文": "门冬胰岛素30(未知)"
},
{
"English": "Isophane Protamine Biosynthetic Human Insulin (Unknown)",
"中文": "精蛋白生物合成人胰岛素(未知)"
},
{
"English": "Regimen Switch Comparison",
"中文": "方案切换对比"
},
{
"English": "Regiment Switch After First Drop (90 Days Interval)",
"中文": "断药90天后的方案"
},
{
"English": "Time to Restart Injectable Regimen",
"中文": "初诊患者 重启注射的时间"
},
{
"English": "Regimen",
"中文": "方案"
},
{
"English": "mean ± sd",
"中文": "平均值±标准差"
},
{
"English": "Restart With The Same Regimen of Premixed Insulin",
"中文": "重启注射方案的种类 预混"
},
{
"English": "Generic name",
"中文": "通用名"
},
{
"English": "N:",
"中文": "人数:"
},
{
"English": "Restart with the same generic name (n, %)",
"中文": "重新注射同种注射方案(人数,比例)"
},
{
"English": "Restart With The Same Regimen of Basal Insulin",
"中文": "重启注射方案的种类 基础"
},
{
"English": "Injectable DOT Comparison",
"中文": "注射治疗时长对比"
},
{
"English": "Drug Naive Patient Time to First Drop (90 Days Interval)",
"中文": "初诊患者首次注射时长(90天内)"
},
{
"English": "OAD Uncontrolled Patient Time to First Drop (90 Days Interval)",
"中文": "口服未受控患者注射时长(90天内)"
},
{
"English": "Injectable Uncontrolled Patient Time to First Drop (90 Days Interval)",
"中文": "注射未受控首次注射时长(90天内)"
},
{
"English": "Time to Non-persistence (Mean ± SD, Days)",
"中文": "注射治疗时长(平均值±标准差,天)"
},
{
"English": "None",
"中文": "无"
},
{
"English": "Drug Group",
"中文": "药品组"
},
{
"English": "Drug Naive pts",
"中文": "初诊患者"
},
{
"English": "Patient Type",
"中文": "就诊类型"
},
{
"English": "Injectable uncontrolled pts",
"中文": "注射未受控患者"
},
{
"English": "Initial Regimen of Drug Naive Patients",
"中文": "初诊患者初治方案"
},
{
"English": "Initial Regimen of Revisit Patients",
"中文": "非初诊患者初治方案"
},
{
"English": "Sex",
"中文": "性别"
},
{
"English": "Number of Patients HbA1c Tested over Baseline",
"中文": "糖化血红蛋白基线检查人数"
},
{
"English": "Number of Patients FPG Tested over Baseline",
"中文": "空腹血糖基线检查人数"
},
{
"English": "Number of Patients P2hPG Tested over Baseline",
"中文": "葡萄糖负荷2小时血糖基线检查人数"
},
{
"English": "Revisit Uncontrolled",
"中文": "初诊未受控"
},
{
"English": "Time to Non-persistence\n(Mean ± SD, days)",
"中文": "首次断药持续时间\n(平均值 ± 标准差, 天)"
},
{
"English": "Time to New Regimen after First Drop\n(Mean ± SD, days)",
"中文": "再次处方时间\n(平均值 ± 标准差, 天)"
},
{
"English": "Switch Regimen after Uncontrolled",
"中文": "未受控后切换方案"
},
{
"English": "OAD after Uncontrolled",
"中文": "未受控后选择口服降糖药"
},
{
"English": "Injectable after Uncontrolled",
"中文": "未受控后选择注射降糖药"
},
{
"English": "Regimen after Uncontrolled",
"中文": "未受控后方案"
},
{
"English": "Characteristics",
"中文": "特征"
},
{
"English": "Time to restart injection",
"中文": "再次启动注射治疗的时间"
},
{
"English": "Choose short-term intensive insulin therapy",
"中文": "选择短期胰岛素强化治疗方案药物的患者"
},
{
"English": "Choose short-term intensive insulin therapy when restart injection",
"中文": "再次启动注射治疗室选择短期胰岛素强化治疗方案药物的患者比例"
},
{
"English": "Regimen after First Short-Term Intensive Insulin Therapy",
"中文": "首次短期胰岛素强化治疗后的降糖方案"
},
{
"English": "Frequency of C-Peptide Abnormal over Follow-Up Period",
"中文": "各季度C肽异常次数"
},
{
"English": "Numbers of Visits",
"中文": "随访次数"
},
{
"English": "Number of Patients C-Peptide Abnormal over Follow-Up Period",
"中文": "各季度C肽异常人数"
},
{
"English": "Proportion of C-Peptide Abnormal over Follow-Up Period",
"中文": "各季度C肽异常比例"
},
{
"English": "Number of Patients C-Peptide Tested over Follow-Up Period",
"中文": "各季度C肽检查人数"
},
{
"English": "Frequency of Diastolic ≥90 mmHg and Systolic ≥140 mmHg over Follow-Up Period",
"中文": "各季度收缩压 ≥140 mmHg同时舒张压 ≥90 mmHg次数"
},
{
"English": "Frequency of Diastolic≥90 mmHg and Systolic ≥140 mmHg",
"中文": "收缩压 ≥140 mmHg同时舒张压 ≥90 mmHg次数"
},
{
"English": "Number of Patients Diastolic ≥90 mmHg and Systolic ≥140 mmHg over Follow-Up Period",
"中文": "各季度收缩压 ≥140 mmHg同时舒张压 ≥90 mmHg人数"
},
{
"English": "Proportion of Diastolic ≥90mmHg and Systolic ≥140 mmHg over Follow-Up Period",
"中文": "各季度收缩压 ≥140 mmHg同时舒张压 ≥90 mmHg比例"
},
{
"English": "Diastolic ≥90mmHg and Systolic ≥140 mmHg.",
"中文": "收缩压 ≥140 mmHg并且舒张压 ≥90 mmHg。"
},
{
"English": "Number of Patients Blood Pressure Tested over Follow-Up Period",
"中文": "各季度血压检查人数"
},
{
"English": "Error Bar",
"中文": "误差线"
},
{
"English": "Follow-Up Quarter:",
"中文": "随访季度:"
},
{
"English": "Number of Patients HbA1c Tested over Follow-Up Period",
"中文": "各季度糖化血红蛋白检查人数"
},
{
"English": "Number of Patients FPG Tested over Follow-Up Period",
"中文": "各季度空腹血糖检查人数"
},
{
"English": "Number of Patients P2hPG Tested over Follow-Up Period",
"中文": "各季度葡萄糖负荷2小时血糖检查人数"
},
{
"English": "Quarters",
"中文": "季度"
},
{
"English": "Hypoglycemia",
"中文": "低血糖"
},
{
"English": "Severe Hypoglycemia",
"中文": "严重低血糖"
},
{
"English": "Patients Hypoglycemia Distribution over Follow-Up Period",
"中文": "患者各季度低血糖分布"
},
{
"English": "Mean HbA1c of Patients after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度糖化血红蛋白平均值"
},
{
"English": "Number of Patients HbA1c Tested after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度糖化血红蛋白检查次数"
},
{
"English": "Mean FPG of Patients after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度空腹血糖平均值"
},
{
"English": "Number of Patients FPG Tested after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度空腹血糖检查次数"
},
{
"English": "Mean P2hPG of Patients after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度葡萄糖负荷2小时血糖平均值"
},
{
"English": "Number of Patients P2hPG Tested after Regimen Switch over Follow-Up Period",
"中文": "方案切换后各季度葡萄糖负荷2小时血糖检查次数"
},
{
"English": "Patients Hypoglycemia Distribution after Regimen Switch",
"中文": "方案切换后患者各季度低血糖分布"
},
{
"English": "FPG <7.0 mmol/L of Patients over Follow-Up Period",
"中文": "各季度空腹血糖 <7.0 mmol/L 患者"
},
{
"English": "HbA1c <7.0 % of Patients over Follow-Up Period",
"中文": "各季度糖化血红蛋白 <7.0 % 患者"
},
{
"English": "Number of No Drug Patients HbA1c Tested over Follow-Up Period",
"中文": "各季度未用药患者糖化血红蛋白检查人数"
},
{
"English": "Mean FPG of No Drug Patients over Follow-Up Period",
"中文": "各季度未用药患者空腹血糖平均值"
},
{
"English": "Number of No Drug Patients FPG Tested over Follow-Up Period",
"中文": "各季度未用药患者空腹血糖检查人数"
},
{
"English": "Number of No Drug Patients P2hPG Tested over Follow-Up Period",
"中文": "各季度未用药患者葡萄糖负荷2小时血糖检查人数"
},
{
"English": "Number of Patients with Diabetes Symptoms over Follow-Up Period",
"中文": "各季度发生糖尿病症状患者"
},
{
"English": "Choose OAD after OAD Uncontrolled",
"中文": "口服后续选择口服"
},
{
"English": "Choose OAD after Injectable Uncontrolled",
"中文": "注射后续选择口服"
},
{
"English": "Choose Injectable after OAD Uncontrolled",
"中文": "注射后续选择口服"
},
{
"English": "Choose Injectable after Injectable Uncontrolled",
"中文": "注射后续选择注射"
},
{
"English": "Drug Naive Uncontrolled",
"中文": "初诊未受控"
},
{
"English": "Drug Naive OAD",
"中文": "初诊口服"
},
{
"English": "Drug Naive Injectable",
"中文": "初诊注射"
},
{
"English": "Revisit OAD",
"中文": "非初诊口服"
},
{
"English": "Revisit Injectable",
"中文": "非初诊注射"
},
{
"English": "Follow-Up",
"中文": "随访"
},
{
"English": "Population",
"中文": "总人群"
},
{
"English": "Age",
"中文": "年龄"
},
{
"English": "N (%)",
"中文": "人数(比例)"
},
{
"English": "Key Indicators",
"中文": "关键指标"
},
{
"English": "Endpoints",
"中文": "治疗结局"
},
{
"English": "Subgroup",
"中文": "亚组"
},
{
"English": "Choose subgroup",
"中文": "选择亚组"
},
{
"English": "First Regimen",
"中文": "初治方案"
},
{
"English": "Regimen Switch",
"中文": "方案切换"
},
{
"English": "C-Peptide",
"中文": "C肽"
},
{
"English": "Hypertension",
"中文": "高血压"
},
{
"English": "Mean HbA1c after Regimen Switch over Follow-Up Period",
"中文": "方案切换后HbA1c水平均值每季度变化"
},
{
"English": "Mean FPG after Regimen Switch over Follow-Up Period",
"中文": "方案切换后空腹血糖水平均值每季度变化"
},
{
"English": "Mean P2hPG after Regimen Switch over Follow-Up Period",
"中文": "方案切换后葡萄糖负荷2小时血糖水平均值每季度变化"
},
{
"English": "Hypoglycemia after Regimen Switch over Follow-Up Period",
"中文": "方案切换后低血糖事件每季度变化"
},
{
"English": "Report Save",
"中文": "导出报告"
},
{
"English": "Generating your report...",
"中文": "正在生成您的报告"
},
{
"English": "Saving...",
"中文": "正在保存..."
},
{
"English": "Your report is being generated. Please wait.",
"中文": "您的报告正在生成,请等待。"
},
{
"English": "Injectable",
"中文": "注射"
},
{
"English": "HbA1c Tested over Baseline",
"中文": "糖化血红蛋白基线检查人数"
},
{
"English": "HbA1c Result over Baseline",
"中文": "糖化血红蛋白基线检查结果"
},
{
"English": "HbA1c Table over Baseline",
"中文": "糖化血红蛋白基线检查表"
},
{
"English": "FPG Tested over Baseline",
"中文": "空腹血糖基线检查人数"
},
{
"English": "FPG Result over Baseline",
"中文": "空腹血糖基线检查结果"
},
{
"English": "FPG Table over Baseline",
"中文": "空腹血糖基线检查表"
},
{
"English": "P2hPG Tested over Baseline",
"中文": "葡萄糖负荷2小时血糖基线检查人数"
},
{
"English": "P2hPG Result over Baseline",
"中文": "葡萄糖负荷2小时血糖基线检查结果"
},
{
"English": "P2hPG Table over Baseline",
"中文": "葡萄糖负荷2小时血糖基线检查表"
},
{
"English": "Number of Patients with C-Peptide Abnormal",
"中文": "C肽异常人数"
},
{
"English": "Number of Patients with Diastolic ≥90 mmHg and Systolic ≥140 mmHg",
"中文": "收缩压 ≥140 mmHg同时舒张压 ≥90 mmHg人数"
},
{
"English": "Regimen after First SIIT",
"中文": "首次短期胰岛素强化治疗后的降糖方案选择"
},
{
"English": "Number of Patients with ≥2 Abnormal Result",
"中文": "异常结果≥2次患者人数"
},
{
"English": "Uncontrolled Patient Flow",
"中文": "未受控患者流向"
},
{
"English": "Mean HbA1c of OAD Patients over Follow-Up Period",
"中文": "口服患者糖化血红蛋白水平均值每季度变化"
},
{
"English": "Mean HbA1c of Injectable Patients over Follow-Up Period",
"中文": "注射患者糖化血红蛋白水平均值每季度变化"
},
{
"English": "Mean HbA1c of No Drug Patients over Follow-Up Period",
"中文": "未用药患者糖化血红蛋白水平均值每季度变化"
},
{
"English": "Number of Patients of HbA1c <7.0 % over Follow-Up Period",
"中文": "各季度时间段糖化血红蛋白 <7.0 %人数"
},
{
"English": "Proportion of Patients of HbA1c <7.0 % over Follow-Up Period",
"中文": "各季度时间段糖化血红蛋白 <7.0 %比例"
},
{
"English": "Mean FPG of OAD Patients over Follow-Up Period",
"中文": "口服患者空腹血糖水平均值每季度变化"
},
{
"English": "Mean FPG of Injectable Patients over Follow-Up Period",
"中文": "注射患者空腹血糖水平均值每季度变化"
},
{
"English": "Number of Patients of FPG <7.0 mmol/L over Follow-Up Period",
"中文": "各季度时间段空腹血糖 <7.0 mmol/L人数"
},
{
"English": "Proportion of Patients of FPG <7.0 mmol/L over Follow-Up Period",
"中文": "各季度时间段空腹血糖 <7.0mmol/L比例"
},
{
"English": "Mean P2hPG of OAD Patients over Follow-Up Period",
"中文": "口服患者葡萄糖负荷2小时血糖水平均值每季度变化"
},
{
"English": "Mean P2hPG of Injectable Patients over Follow-Up Period",
"中文": "注射患者葡萄糖负荷2小时血糖水平均值每季度变化"
},
{
"English": "Mean P2hPG of No Drug Patients over Follow-Up Period",
"中文": "未用药患者葡萄糖负荷2小时血糖水平均值每季度变化"
},
{
"English": "Hypoglycemia Distribution of OAD Patients over Follow-Up Period",
"中文": "各季度口服患者低血糖事件分布"
},
{
"English": "Hypoglycemia Distribution of Injectable Patients over Follow-Up Period",
"中文": "各季度注射患者低血糖事件分布"
},
{
"English": "Hypoglycemia Distribution after Regimen Switch over Follow-Up Period",
"中文": "换药后各季度患者低血糖事件分布"
},
{
"English": "Patients Complication Distribution",
"中文": "患者并发症情况分布"
},
{
"English": "Bolus",
"中文": "餐时"
},
{
"English": "Tested",
"中文": "检查"
},
{
"English": "Untested",
"中文": "未检查"
},
{
"English": "Follow-Up Time (Quarters)",
"中文": "随访时间(季度)"
},
{
"English": "OAD Uncontrolled-Reorder OAD",
"中文": "口服未受控-后续选择口服"
},
{
"English": "OAD Uncontrolled-Reorder Injectable",
"中文": "口服未受控-后续选择注射"
},
{
"English": "Injectable Uncontrolled-Reorder OAD",
"中文": "注射未受控-后续选择口服"
},
{
"English": "Injectable Uncontrolled-Reorder Injectable",
"中文": "注射未受控-后续选择注射"
},
{
"English": "Frequency of C-Peptide Abnormal",
"中文": "C肽异常次数"
},
{
"English": "Proportion of C-Peptide Abnormal",
"中文": "C肽异常比例"
},
{
"English": "Proportion of Patients",
"中文": "患者比例"
},
{
"English": "LLA",
"中文": "下肢截肢"
},
{
"English": "Blood Pressure",
"中文": "血压"
},
{
"English": "Comorbidity over Baseline",
"中文": "基线期共病情况"
},
{
"English": "Comorbidity over Follow-Up",
"中文": "随访期共病情况"
},
{
"English": "Complication over Baseline",
"中文": "基线期并发症情况"
},
{
"English": "Complication over Follow-Up",
"中文": "随访期并发症情况"
},
{
"English": "Proportion and Time of Reinitiate SIIT",
"中文": "再次启动注射治疗的患者比例和时间"
},
{
"English": "Frequency of Hypertension",
"中文": "高血压次数"
},
{
"English": "Number of Patients with Hypertension",
"中文": "高血压人数"
},
{
"English": "Proportion of Hypertension",
"中文": "高血压比例"
},
{
"English": "The comorbidity baseline is the latest visit record within the baseline period.",
"中文": "共病基线期为基线日内最近一次就诊记录。"
},
{
"English": "The complication baseline is the visit record within 3 months before the index date.",
"中文": "并发症基线期定义为基线日前3月内的就诊记录。"
},
{
"English": "Uncontrolled was defined as HbA1c ≥7 % at the last test before discontinuation of the patient's initial treatment regimen.",
"中文": "未受控定义为患者初治治疗方案发生断药前的末次检查糖化血红蛋白值 ≥7 %"
},
{
"English": "Normal HbA1c: [4, 7), Diabetes HbA1c: 7.0 or higher. (%)",
"中文": "血糖正常时糖化血红蛋白水平为: [4, 7), 糖尿病患者糖化血红蛋白水平 ≥7.0。(单位: %)"
},
{
"English": "Normal FPG: [3.9, 6.0), Prediabetes FPG: [6.0, 7.0), Diabetes FPG: 7.0 or higher. (mmol/L)",
"中文": "血糖正常时空腹血糖值为: [3.9, 6.0), 血糖偏高时空腹血糖值为: [6.0, 7.0), 糖尿病时空腹血糖值 ≥7.0。(单位: mmol/L)"
},
{
"English": "Normal P2hPG: 7.8 or lower, Prediabetes P2hPG: [7.8, 11.1), Diabetes P2hPG: 11.1 or higher. (mmol/L)",
"中文": "血糖正常时葡萄糖负荷2小时血糖值7.8以下, 血糖偏高时葡萄糖负荷2小时血糖值为: [7.8, 11.1), 糖尿病时葡萄糖负荷2小时血糖 ≥11.1。(单位: mmol/L)"
},
{
"English": "The fasting C-peptide high than 1.0 nmol/L, the C-peptide value 2 hours after a meal high than 2.5 nmol/L. C-peptide increases more than 5 times 1-2 hours after a meal, and C-peptide did not return to normal fasting levels 3 hours after a meal.",
"中文": "空腹C肽值 1.0 nmol/L以上, 餐后2小时C肽值 2.5 nmol/L以上, 餐后1-2小时增加大于5倍, 餐后3小时未恢复到空腹水平正常值。"
},
{
"English": "0 is baseline, three months before index date to the index date is defined as the baseline period.",
"中文": "0为基线期,入组前三个月至入组日的时间窗定义为基线期。"
},
{
"English": "The number of patients with any of the following test results ≥2 times was counted: fasting blood glucose ≥7.0 mmol/L, glucose load 2-hour blood glucose ≥11.1 mmol/L, HbA1c ≥7 %.",
"中文": "统计有 ≥2 次下述任意一种检查结果的患者人数:空腹血糖 ≥7.0 mmol/L,葡萄糖负荷2小时血糖 ≥11.1 mmol/L,HbA1c ≥7 %。"
},
{
"English": "Hypoglycemic events: FPG lower than 3.9 mmol/L, severe hypoglycemic events: FPG lower than 3.0 mmol/L or hospitalization due to hypoglycemia.",
"中文": "低血糖事件:FPG 3.9 mmol/L以下,严重低血糖事件:FPG 3.0 mmol/L以下或因低血糖住院。"
},
{
"English": "The amount of data is too small to display",
"中文": "数据量太少无法展示"
}
]
}
import os
import ast
import logging
from configparser import RawConfigParser
from datetime import datetime

def get_config_value(section, config_key, conf_path):
    """
    Read the config file and return the value for the given section/key,
    or None if the option does not exist.
    :param section: section name, e.g. [file_paths]
    :param config_key: option key, e.g. cdm_path
    :param conf_path: path to the config file
    :return: config_value, the parsed option value
    """
    config = RawConfigParser()
    config.optionxform = str  # keep option keys case-sensitive
    config.read(conf_path, encoding='utf-8')
    if not config.has_option(section, config_key):
        return None
    # Option values are stored as Python literals (strings/lists/ints);
    # literal_eval parses them without the risks of a full eval
    config_value = ast.literal_eval(config.get(section, config_key))
    return config_value
conf_path = '../../config/transform/standardisation_config.ini'  # standardization config file
dir_base = get_config_value('file_paths', 'data_dir_base', conf_path)  # base data directory
rawdata_dir = get_config_value('file_paths', 'rawdata_dir', conf_path)  # read config values
cleandata_dir = get_config_value('file_paths', 'cleandata_dir', conf_path)
dictdata_dir = get_config_value('file_paths', 'dictdata_dir', conf_path)
check_standard_dir = get_config_value('file_paths', 'check_standard_dir', conf_path)
log_dir = get_config_value('file_paths', 'log_dir', conf_path)
source_file_format = get_config_value('file_format', 'source_file_format', conf_path)
target_file_format = get_config_value('file_format', 'target_file_format', conf_path)
dict_file_format = get_config_value('file_format', 'dict_file_format', conf_path)
top_num = get_config_value('check_standard', 'top_num', conf_path)
std_table_names = get_config_value('std_table_names', 'std_table_names', conf_path)
table_names = get_config_value('table_names', 'table_names', conf_path)
rawdata_path = os.path.join(dir_base, rawdata_dir)
cleandata_path = os.path.join(dir_base, cleandata_dir)
dictdata_path = os.path.join(dir_base, dictdata_dir)
check_standard_path = os.path.join(dir_base, check_standard_dir)
current_date = datetime.now().strftime('%Y%m%d')
log_file_path = os.path.join(dir_base, log_dir, f'log_{current_date}.log')  # log file path
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
# Open the log file with utf-8 encoding
logger = logging.getLogger()
handler = logging.FileHandler(log_file_path, 'a', encoding='utf-8')
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
from standard import data_standard, check_standard
import warnings
warnings.filterwarnings("ignore")
from constant import *
from utils import filter_and_copy_files, file_format_unification

def data_governance(conf_path):
    """
    Top-level data-governance driver
    - patient merging: pass
    - data cleaning: pass
    - data standardization
    - data quality checks: pass
    - inclusion/exclusion logic
    :return: governed dataframe
    """
    # 1. Unify file formats: convert csv files to parquet
    # file_format_unification(rawdata_path)
    # 2. Standardize the data
    for std_table_name in std_table_names:
        data_standard(conf_path, std_table_name)
    # 3. Inspect the standardized data: keep the top-N standardized values so
    #    later runs can tell whether the source data has been updated
    check_standard(conf_path)
    # 4. Copy non-standardized files to the final_cdm directory
    filter_and_copy_files(table_names, std_table_names, rawdata_path, cleandata_path)

# Run data governance
data_governance(conf_path)
pandas==2.0.3
pyarrow==14.0.2
duckdb
from utils import write_dataframe_to_file, read_file_to_dataframe, check_data
from constant import *
import re
import os
import pandas as pd
import duckdb
def df_filter_condition(df, columns, filter_style, regex_patterns):
    """
    Build a row-filter condition for a dataframe
    :param df: dataframe
    :param columns: a column name, or a list of column names
    :param filter_style: logical template, e.g. '{1} and not {2}'
    :param regex_patterns: list of regular expressions
    :return: filter_condition, a boolean Series selecting the matching rows
    """
    # If a single column name is given, repeat it for every pattern
    if isinstance(columns, str):
        columns = [columns] * len(regex_patterns)
    # Start with a condition that keeps every row
    filter_condition = pd.Series(True, index=df.index)
    # Substitute each {i} placeholder in filter_style with a str.contains() expression
    for i, column in enumerate(columns):
        # Numeric columns (int/float) are cast to string before regex matching
        if pd.api.types.is_numeric_dtype(df[column]):
            df[column] = df[column].astype(str)
        filter_style = filter_style.replace(f'{{{i+1}}}', f'df["{column}"].str.contains("{regex_patterns[i]}", flags=re.I)')
    # Evaluate the assembled expression (and/or/not become the pandas operators &/|/~)
    filter_condition &= eval(filter_style.replace('and', '&').replace('or', '|').replace('not ', '~'))
    return filter_condition
def condition_based_standardization(df, std_col_name, condition_dict):
    """
    Condition-based standardization
    :param df: dataframe to standardize
    :param std_col_name: name of the standardized column
    :param condition_dict: standardization dictionary
    :return: standard_df, the standardized dataframe
    """
    if condition_dict is None or std_col_name is None:
        return df
    # Walk through the condition dictionary
    for std_name, condition in condition_dict.items():
        col_names = condition[0]       # a column name or a list of column names
        filter_style = condition[1]    # logical template
        regex_patterns = condition[2]  # list of regular expressions
        # Build the row filter
        filter_condition = df_filter_condition(df, col_names, filter_style, regex_patterns)
        # For matching rows, write the class name into std_col_name; if the cell already
        # holds a value, append with a comma. Example: a diagnosis "糖尿病、高血压"
        # ends up standardized as "糖尿病,高血压".
        df.loc[filter_condition, std_col_name] = df.loc[filter_condition, std_col_name].apply(lambda x: f'{x},{std_name}' if pd.notnull(x) else std_name)
    # Standardized dataframe
    standard_df = df
    return standard_df
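The comma-append tagging that `condition_based_standardization` performs can be sketched on toy data (hypothetical diagnosis strings, and plain single-regex conditions instead of the templated `filter_style`):

```python
import re
import pandas as pd

# Toy diagnosis table; the real pipeline tags rows whose free-text column matches
# a regex, appending the standardized class name with a comma separator
df = pd.DataFrame({'dx_desc': ['2型糖尿病', '高血压病', '糖尿病伴高血压']})
df['std_dx_desc'] = None

for std_name, pattern in [('糖尿病', '糖尿病'), ('高血压', '高血压')]:
    mask = df['dx_desc'].str.contains(pattern, flags=re.I)
    # Append when the cell already holds a class, otherwise set it
    df.loc[mask, 'std_dx_desc'] = df.loc[mask, 'std_dx_desc'].apply(
        lambda x: f'{x},{std_name}' if pd.notnull(x) else std_name)

print(df['std_dx_desc'].tolist())  # → ['糖尿病', '高血压', '糖尿病,高血压']
```

A row matching several conditions accumulates all of their class names, which is why the third toy row carries both tags.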
def create_directory_if_not_exists(directory):
    if not os.path.exists(directory):
        os.makedirs(directory)
        print(f"Directory '{directory}' did not exist and has been created.")
def check_standard(conf_path):
    """
    For each standardized table:
    - write the top-N standardized values to a file, so later runs can check
      whether the raw standardized-column data has been updated
    - where the standardized column is non-null, group by the raw and standardized
      columns, count frequencies, sort descending and write the top N rows to a file
    :param conf_path: path to the config file
    :return: None
    """
    if std_table_names is not None:
        for std_table_name in std_table_names:
            input_file_path = os.path.join(cleandata_path, std_table_name + target_file_format)
            standard_df = read_file_to_dataframe(input_file_path, target_file_format)
            raw_col_name = get_config_value(std_table_name, 'raw_col_name', conf_path)
            for col_name in raw_col_name:
                std_col_name = 'std_' + col_name
                ##### Check whether key variables were updated, e.g. dx_desc #####
                grouped_df = standard_df.groupby(col_name).size().nlargest(top_num).reset_index(name='count')
                merged_df = grouped_df.merge(standard_df, on=col_name, how='left')
                top_num_df = merged_df[[col_name, std_col_name, 'count']].drop_duplicates()
                output_file_path = os.path.join(check_standard_path, 'top' + str(top_num) + '_' + col_name + target_file_format)
                write_dataframe_to_file(top_num_df, output_file_path, target_file_format)
                ##### Check the standardized content #####
                # Non-null standardized column: group by raw and standardized columns,
                # count frequencies, sort descending, keep the top N rows
                filtered_df = standard_df[standard_df[std_col_name].notnull()]
                grouped_df = filtered_df.groupby([col_name, std_col_name]).size().reset_index(name='count')
                sorted_df = grouped_df.sort_values(by='count', ascending=False)
                check_std_df = sorted_df.head(top_num)
                output_file_path = os.path.join(check_standard_path, 'check_std_' + col_name + target_file_format)
                write_dataframe_to_file(check_std_df, output_file_path, target_file_format)
def data_standard_bak(conf_path, section):
    """
    Data standardization (legacy version)
    :param conf_path: path to the config file
    :param section: section name, e.g. [file_paths]
    :return: standard_df
    """
    # Read configuration
    table_name = section
    raw_col_name = get_config_value(table_name, 'raw_col_name', conf_path)
    std_col_name = get_config_value(table_name, 'std_col_name', conf_path)
    std_main_class = get_config_value(table_name, 'std_main_class', conf_path)
    subclass_dict = get_config_value(table_name, 'subclass_dict', conf_path)
    mainclass_dict = get_config_value(table_name, 'mainclass_dict', conf_path)
    # Load the raw dataframe
    input_file_path = os.path.join(rawdata_path, table_name + source_file_format)
    raw_df = read_file_to_dataframe(input_file_path, source_file_format)
    # Rename the column back to the raw column name
    if std_col_name is not None and raw_col_name is not None:
        raw_df.rename(columns={std_col_name: raw_col_name}, inplace=True)
        raw_df[std_col_name] = None
    if std_main_class is not None:
        raw_df[std_main_class] = None
    # Condition-based standardization
    subclass_df = condition_based_standardization(raw_df, std_col_name, subclass_dict)  # subclass level
    mainclass_df = condition_based_standardization(subclass_df, std_main_class, mainclass_dict)  # main-class level
    standard_df = mainclass_df
    # Write the data
    output_file_path = os.path.join(cleandata_path, table_name + target_file_format)
    write_dataframe_to_file(standard_df, output_file_path, target_file_format)
    # Log whether standardization succeeded
    check_data(standard_df, table_name, 'data standardization')
    return standard_df
def process_data(dataframes, table_name, create_table_sql, create_dict_table_sql, stand_sql, update_sql, dict_name_lst):
    """
    Run the given SQL statements over the data and return the result.
    Parameters:
        dataframes: dict of dataframes used in joins.
        table_name: table name.
        create_table_sql (str): SQL that creates the base table.
        create_dict_table_sql (str): SQL that creates the dictionary tables.
        stand_sql (str): SQL that performs the standardization.
        update_sql (str): SQL that updates the data.
        dict_name_lst: list of dictionary-table names.
    Returns:
        The standardized data (the dictionary data is written to disk as a side effect).
    """
    # Create a DuckDB connection
    con = duckdb.connect(database=':memory:')
    # Expose each DataFrame under its table name so DuckDB's replacement scan
    # can resolve it by name in the SQL below
    for join_table_name, df in dataframes.items():
        globals()[join_table_name] = df
    # Create the base table
    con.execute(create_table_sql)
    # Create the dictionary tables
    con.execute(create_dict_table_sql)
    # Standardize the data
    con.execute(stand_sql)
    # Update the data
    con.execute(update_sql)
    # Write the standardized data to the target directory
    standard_df = con.execute(f"SELECT * FROM {table_name}").fetchdf()
    output_file_path = os.path.join(cleandata_path, table_name + target_file_format)
    write_dataframe_to_file(standard_df, output_file_path, target_file_format)
    # Write the standardized dictionary data to the target directory
    for dict_name in dict_name_lst:
        dict_df = con.execute(f"SELECT * FROM {dict_name}").fetchdf()
        output_file_path = os.path.join(dictdata_path, dict_name + dict_file_format)
        write_dataframe_to_file(dict_df, output_file_path, dict_file_format)
    # Close the connection
    con.close()
    return standard_df
def data_standard(conf_path, section):
    """
    Data standardization
    :param conf_path: path to the config file
    :param section: section name, e.g. [file_paths]
    :return: standard_df
    """
    # Read configuration
    table_name = section
    join_table_name_lst = get_config_value(table_name, 'join_table_name_lst', conf_path)
    dict_name_lst = get_config_value(table_name, 'dict_name_lst', conf_path)
    create_table_sql = get_config_value(table_name, 'create_table_sql', conf_path)
    create_dict_table_sql = get_config_value(table_name, 'create_dict_table_sql', conf_path)
    stand_sql = get_config_value(table_name, 'stand_sql', conf_path)
    update_sql = get_config_value(table_name, 'update_sql', conf_path)
    # Dict holding the input DataFrames
    dataframes = {}
    # Read each join table into a DataFrame
    for join_table_name in join_table_name_lst:
        input_file_path = os.path.join(rawdata_path, join_table_name + source_file_format)
        join_table_df = read_file_to_dataframe(input_file_path, source_file_format)
        dataframes[join_table_name] = join_table_df
    standard_df = process_data(dataframes, table_name, create_table_sql, create_dict_table_sql, stand_sql, update_sql, dict_name_lst)
    # Log whether standardization succeeded
    check_data(standard_df, table_name, 'data standardization')
    return standard_df
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
import shutil
from constant import *

def check_data(df, table_name, log_description):
    """
    Check whether the dataframe holds any data; log success if so, failure otherwise
    """
    if not df.empty:  # the DataFrame is non-empty
        logging.info('table {}: {} succeeded'.format(table_name, log_description))
    else:
        logging.warning('table {}: {} failed'.format(table_name, log_description))
def file_format_unification(dir_path):
    """
    Unify file formats: convert csv files to parquet
    :param dir_path: directory path
    :return: None
    """
    dir_path = os.path.join(dir_path)
    # Walk every file under the directory
    for root, dirs, files in os.walk(dir_path):
        for file_name in files:
            if file_name.endswith('.csv'):
                file_path = os.path.join(root, file_name)
                # Read the csv file
                df = pd.read_csv(file_path, encoding='utf-8')
                table = pa.Table.from_pandas(df)
                # Replace '.csv' with '.parquet' in the file path
                parquet_file_path = file_path.replace('.csv', '.parquet')
                # Write the parquet file
                pq.write_table(table, parquet_file_path)
                # Remove the csv file if it still exists
                if os.path.isfile(file_path):
                    os.remove(file_path)
def filter_and_copy_files(table_names, std_table_names, source_dir, target_dir):
    """
    Copy files that needed no standardization or cleaning to the final cdm directory
    :param table_names: list of all table names
    :param std_table_names: list of standardized table names
    :param source_dir: source directory
    :param target_dir: target directory
    :return: None
    """
    # Iterate over the table names, excluding the standardized tables
    filter_tables = list(set(table_names) - set(std_table_names))
    for table_name in filter_tables:
        source_file = os.path.join(source_dir, table_name + source_file_format)
        target_file = os.path.join(target_dir, table_name + target_file_format)
        if not os.path.exists(source_file):
            continue
        # If source and target formats match, copy as-is
        if source_file_format == target_file_format:
            shutil.copy(source_file, target_file)
        else:
            # Otherwise convert the format and write to the target directory
            source_df = read_file_to_dataframe(source_file, source_file_format)
            write_dataframe_to_file(source_df, target_file, target_file_format)
def get_config_value_bak(section, config_key, conf_path, center=None):
    """
    Read the config file and return the value for the given section/key,
    or None if the option does not exist
    :param section: section name, e.g. [file_paths]
    :param config_key: option key, e.g. cdm_path
    :param conf_path: path to the config file
    :param center: hospital name abbreviation
    :return: config value, e.g. './generate_cdm/{}/cdm/'
    """
    config = RawConfigParser()
    config.optionxform = str  # keep option keys case-sensitive
    config.read(conf_path, encoding='utf-8')
    if not config.has_option(section, config_key):
        return None
    config_value = eval(config.get(section, config_key))  # values are stored as Python literals
    if center is not None:
        config_value = config_value.format(center)
    return config_value
def read_file_to_dataframe(input_file_path, file_format):
    """
    Read a file into a DataFrame according to its format
    :param input_file_path: input file path
    :param file_format: file format
    :return: DataFrame, or None for an unsupported format
    """
    if file_format == '.csv':
        df = pd.read_csv(input_file_path, encoding='utf-8')
    elif file_format == '.parquet':
        table = pq.read_table(input_file_path)
        df = table.to_pandas()
    else:
        logging.error("Unsupported file format")
        return None
    return df
def write_dataframe_to_file(df, file_path, file_format):
    """
    Write a DataFrame to a file in the given format
    :param df: DataFrame
    :param file_path: file path
    :param file_format: file format
    :return: None
    """
    if file_format == '.csv':
        df.to_csv(file_path, index=False, encoding='utf-8-sig')
    elif file_format == '.parquet':
        table = pa.Table.from_pandas(df)
        pq.write_table(table, file_path)
    else:
        logging.error("Unsupported file format")
def df_lookup_mult(df, query_columns, query_values, return_columns):
    """
    Look up the first row matching equality conditions on one or more columns
    and return the requested column(s), or None when nothing matches
    """
    # Make sure query_columns and query_values are lists
    if isinstance(query_columns, (str, int)):
        query_columns = [query_columns]
    if not isinstance(query_values, (list, tuple)):
        query_values = [query_values]
    # Compare everything as strings
    query = (df[query_columns[0]].astype(str) == str(query_values[0]))
    # Combine any additional conditions
    for col, val in zip(query_columns[1:], query_values[1:]):
        query &= (df[col].astype(str) == str(val))
    filtered_df = df[query]
    # Return the first matching row's column(s)
    if not filtered_df.empty:
        return filtered_df[return_columns].iloc[0]
    return None
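A minimal illustration of this multi-column lookup on toy data (the `code`/`name`/`dose` columns are made up for the example):

```python
import pandas as pd

# Toy lookup table
df = pd.DataFrame({'code': ['A1', 'A2'],
                   'name': ['aspirin', 'metformin'],
                   'dose': [100, 500]})

# Multi-column equality lookup: compare as strings, return the first hit
query = (df['code'].astype(str) == 'A2') & (df['name'].astype(str) == 'metformin')
hit = df[query]
result = hit['dose'].iloc[0] if not hit.empty else None
print(result)  # → 500
```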
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
#* * * * * root echo "Hello from the crontab" >> /var/log/cron.log 2>&1
* * * * * root echo `date` >> /var/log/cron.log 2>&1
# This file was generated by Pentaho Data Integration version 6.0.1.0-386.
#
# Here are a few examples of variables to set:
#
# PRODUCTION_SERVER = hercules
# TEST_SERVER = zeus
# DEVELOPMENT_SERVER = thor
#
# Note: lines like these with a # in front of it are comments
# Output row limit (0 = output all rows)
limit_size=0
# Data output path
cdm_directory=/opt/in2-t2dm/data/rawdata/
# Dongtai postgresql database connection
dt_postgresql_host=192.168.3.240
dt_postgresql_database=postgres
dt_postgresql_port=5433
dt_postgresql_user_name=postgres
#dt_postgresql_password=postgres
dt_postgresql_password=Encrypted 2be98afc86aa7f2e4bb16bd64d980aac9
#################################### Data extraction config ##############################################
# Institution ids; pass None to extract all institutions' data; must be a list
[pv_ids]
pv_ids = ['320106426090445', '320104466002630', '320106466000838']
# pv_ids = [None]
# Extraction date range(s); each entry is a [start, end] pair; must be a list
[date_ranges]
date_ranges = [["2021-01-01", "2021-07-01"]]
# date_ranges = [["2021-01-01", "2021-07-01"],["2021-07-01", "2022-01-01"],["2022-01-01", "2022-07-01"],["2022-07-01", "2023-01-01"],["2023-01-01", "2023-07-01"],["2023-07-01", "2024-01-01"],["2024-01-01", "2024-07-01"],["2024-07-01", "2024-10-01"]]
# Tables to extract
[tables]
tables = ['patient', 'visit', 'prescribing', 'diagnosis', 'lab']
events {
worker_connections 1024;
}
http {
server {
listen 80;
location /shiny/ {
proxy_pass http://shiny:3838/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;
# Define a server that listens on port 3838
server {
listen 3838;
# Define a location at the base URL
location / {
# Host the directory of Shiny Apps stored in this directory
# site_dir /srv/shiny-server;
site_dir /opt/in2-t2dm/codes/shiny;
# Log all Shiny output to files in this directory
#log_dir /var/log/shiny-server;
log_dir /opt/in2-t2dm/logs/shiny/;
# When a user visits the base URL rather than a particular application,
# an index of the applications available in this directory will be shown.
directory_index on;
}
}
#################################### Data standardization config ##############################################
[table_names]
table_names = ['patient','visit','diagnosis','prescribing','lab_result_cm','vital']
[std_table_names]
std_table_names = ['diagnosis','prescribing','lab_result_cm']
# std_table_names = ['prescribing']
[file_paths]
# Base data directory
data_dir_base = '../../'
rawdata_dir = 'data/rawdata/'
cleandata_dir = 'data/cleandata/'
dictdata_dir = 'data/dictdata/'
check_standard_dir = 'data/cleandata/check_standard/'
log_dir = 'logs/transform/'
[file_format]
source_file_format = '.csv'
target_file_format = '.csv'
dict_file_format = '.csv'
[check_standard]
top_num = 500
######################## Standardization config notes ########################
# Notes:
# 1. A backslash "\" inside a regex must be escaped as "\\".
# 2. The case-insensitive "~*" operator raises an error; use "(?i)" instead.
## [table name]
#[prescribing]
##### Raw column names, used by the standardization checks; list several for multi-column standardization #####
#raw_col_name = ['rx_desc']
##### Multi-table join configuration #####
#join_table_name_lst = ['prescribing']
##### Exported dictionary tables #####
#dict_name_lst = ['rx_desc_dict']
##### Standardization configuration #####
## 1. Create the base table; add the standardized columns
#create_table_sql = 'create-table sql; add-standardized-column sql;'
## 2. Create the dictionary tables; add the standardized columns
#create_dict_table_sql = 'create-dict-table sql; add-standardized-column sql;'
## 3. Standardization sql
#stand_sql = '''standardization sql;'''
## 4. Update the base table by joining the dictionary tables
#update_sql = '''update sql;'''
[prescribing]
raw_col_name = ['rx_desc','frequency']
join_table_name_lst = ['prescribing']
dict_name_lst = ['rx_desc_dict','frequency_dict']
create_table_sql = 'CREATE TABLE prescribing as SELECT * FROM prescribing;ALTER TABLE prescribing ADD COLUMN std_rx_desc VARCHAR;ALTER TABLE prescribing ADD COLUMN std_frequency VARCHAR;'
create_dict_table_sql = 'CREATE TABLE rx_desc_dict as SELECT rx_desc,count(1) frequency FROM prescribing group by rx_desc order by frequency desc;ALTER TABLE rx_desc_dict ADD COLUMN std_rx_desc VARCHAR;CREATE TABLE frequency_dict as SELECT frequency,count(1) cnt FROM prescribing group by frequency order by cnt desc;ALTER TABLE frequency_dict ADD COLUMN std_frequency VARCHAR;'
update_sql = '''UPDATE prescribing SET std_rx_desc = rx_desc_dict.std_rx_desc FROM rx_desc_dict WHERE prescribing.rx_desc = rx_desc_dict.rx_desc AND rx_desc_dict.std_rx_desc IS NOT NULL; UPDATE prescribing SET std_frequency = frequency_dict.std_frequency FROM frequency_dict WHERE prescribing.frequency = frequency_dict.frequency AND frequency_dict.std_frequency IS NOT NULL;'''
stand_sql = '''UPDATE rx_desc_dict SET std_rx_desc = case when rx_desc ~ '(?i).*格列(本脲|吡嗪|喹酮|齐特|美脲|波脲).*|.*甲苯磺丁脲.*|.*氯磺丙脲.*|.*优降糖.*|.*达安疗.*|.*美吡达.*|.*瑞易宁.*|.*秦苏.*|.*迪沙.*|.*依吡达.*|.*优哒灵.*|.*元坦.*|.*麦林格.*|.*唐贝克.*|.*曼迪宝.*|.*美吡达.*|.*糖适平.*|.*捷适.*|.*达美康.*|.*弗莱因.*|.*弘旭阳.*|.*谐尔平.*|.*亚莫利.*|.*万苏平.*|.*佑苏.*|.*力贻苹.*|.*迪北.*|.*安多美.*|.*科德平.*|.*伊瑞.*|.*佳和洛.*|.*普仁平.*|.*克糖利.*' then '磺脲类' when rx_desc ~ '(?i).*(瑞|那|米)格列奈.*|.*诺和龙.*|.*弗来迪.*|.*唐力.*|.*唐瑞.*|.*贝加.*|.*快如妥.*' and rx_desc !~ '(?i).*二甲双胍.*|.*甲福明.*|.*格华止.*|.*奈达.*|.*泰白.*|.*至力.*|.*倍顺.*|.*麦克罗辛.*|.*麦特美.*|.*唐必呋.*|.*亿恒.*|.*仁欣.*|.*悦达宁.*|.*力乐尔.*|.*卜可.*|.*迪化唐锭.*|.*美迪康.*|.*君士达新.*|.*唐落.*|.*山姆士.*|.*君力达.*' then '格列奈类' when rx_desc ~ '(?i).*二甲双胍.*|.*甲福明.*|.*格华止.*|.*奈达.*|.*泰白.*|.*至力.*|.*倍顺.*|.*麦克罗辛.*|.*麦特美.*|.*唐必呋.*|.*亿恒.*|.*仁欣.*|.*悦达宁.*|.*力乐尔.*|.*卜可.*|.*迪化唐锭.*|.*美迪康.*|.*君士达新.*|.*唐落.*|.*山姆士.*|.*君力达.*' and rx_desc !~ '(?i).*吡嗪.*|.*吡格.*|.*本脲.*|.*注射.*|.*(西|沙|维|阿|利)格列汀.*|.*恩格列净.*|.*欧唐静.*' then '双胍类' when rx_desc ~ '(?i).*(罗|吡).*格列酮.*|.*文迪雅.*|.*奥洛华.*|.*爱能.*|.*太罗.*|.*维戈洛.*|.*宜力喜.*|.*圣敏.*|.*耐迪.*|.*安瑞宁.*|.*艾可拓.*|.*卡司平.*|.*顿灵.*|.*贝唐宁.*|.*佳普喜.*|.*安可妥.*|.*凯宝维元.*|.*艾汀.*|.*卡司平.*|.*瑞彤.*|.*列洛.*|.*夷友.*' and rx_desc !~ '(?i).*二甲双胍.*' then '噻唑烷二酮类' when rx_desc ~ '(?i).*(阿卡|伏格列)波糖.*|.*米格列醇.*|.*拜唐苹.*|.*卡博平.*|.*贝希.*|.*倍欣.*|.*华怡平.*|.*德赛天.*|.*米格尼醇.*|.*Glyset.*|.*奥恬苹.*|.*瑞舒.*' and rx_desc !~ '(?i).*他汀.*' then 'α-糖苷酶抑制剂类' when rx_desc ~ '(?i).*(西|沙|维|阿|利)格列汀.*' and rx_desc !~ '(?i).*二甲双胍.*|.*甲福明.*|.*格华止.*|.*奈达.*|.*泰白.*|.*至力.*|.*倍顺.*|.*麦克罗辛.*|.*麦特美.*|.*唐必呋.*|.*亿恒.*|.*仁欣.*|.*悦达宁.*|.*力乐尔.*|.*卜可.*|.*迪化唐锭.*|.*美迪康.*|.*君士达新.*|.*唐落.*|.*山姆士.*|.*君力达.*' then 'DPP4i' when rx_desc ~ '(?i).*(达|恩|卡)格列净.*|.*安达唐.*|.*欧唐静.*|.*怡可安.*' then 'SGLT2i' when (rx_desc ~ '(?i).*(西|沙|维|利)格列汀.*|.*恩格列净.*' and rx_desc ~ '(?i).*二甲双胍.*|.*甲福明.*|.*格华止.*|.*奈达.*|.*泰白.*|.*至力.*|.*倍顺.*|.*麦克罗辛.*|.*麦特美.*|.*唐必呋.*|.*亿恒.*|.*仁欣.*|.*悦达宁.*|.*力乐尔.*|.*卜可.*|.*迪化唐锭.*|.*美迪康.*|.*君士达新.*|.*唐落.*|.*山姆士.*|.*君力达.*') or rx_desc ~ '(?i).*欧双(宁|静).*|.*捷诺达.*|.*宜合瑞.*|.*安立格.*' then '复方制剂' when (rx_desc ~ 
'(?i).*胰岛素.*|.*(重和|万苏)林.*|.*甘舒霖.*|.*优泌(林|乐).*|.*NPH.*|.*艾倍得.*|.*糖德仕.*|.*来得时.*|.*优(思|乐)灵.*|.*(速|长)秀霖.*|.*诺和(锐|平|佳|达|灵).*' and rx_desc ~ '(?i).*精蛋白锌?重组人.*|.*低精蛋白重组人.*|.*低精蛋白锌.*|.*精蛋白生物合成人.*|.*甘精.*|.*地特.*|.*(徳|德)谷.*|.*(优泌林|重和林|甘舒霖|诺和灵).?N.*|.*诺和达.*|.*NPH.*|.*诺和平.*|.*来得时.*|.*长秀霖.*|.*优乐灵.*|.*糖德仕.*|.*中效.*|^精蛋白人胰岛素$') and rx_desc !~ '(?i).*注射器针头.*|.*注射针头.*|.*射器.*|.*针头.*|.*泵耗材.*|.*混合.*|.*30R.*|.*50R.*|.*70R.*|.*25R.*|.*预混.*|.*门冬.*|.*70\\/30.*|.*诺和灵R.*|^精蛋白锌重组人胰岛素注射液.*|.*\\(兴\\)精蛋白锌重组人胰岛素注射液.*|.*\\(N笔芯\\)精蛋白生物合成人胰岛素.*' then '基础胰岛素' when (rx_desc ~ '(?i).*胰岛素.*|.*(重和|万苏)林.*|.*甘舒霖.*|.*优泌(林|乐).*|.*NPH.*|.*艾倍得.*|.*糖德仕.*|.*来得时.*|.*优(思|乐)灵.*|.*(速|长)秀霖.*|.*诺和(锐|平|佳|达|灵).*' and rx_desc ~ '(?i).*25R.*|.*30R.*|.*30R.*|.*50R.*|.*30\\/70.*|.*预混.*|.*混(和|合).*|.*(甘舒霖|万苏林).?(30|40|50)R.*|.*诺和灵.?50R.*|.*优泌林.?70\\/30.*|.*重和林.?M30.*|.*优思灵.?(30\\/70).*|.*(优泌乐|诺和锐)(50|25|30).*|.*25.*|.*50.*|.*70.*|.*30注射液.*|.*胰岛素30.*') and rx_desc !~ '(?i).*注射器针头.*|.*注射针头.*|.*射器.*|.*针头.*|.*泵耗材.*|.*502181.*|.*甘精胰岛素.*' then '预混胰岛素' when (rx_desc ~ '(?i).*胰岛素.*|.*(重和|万苏)林.*|.*甘舒霖.*|.*优泌(林|乐).*|.*NPH.*|.*艾倍得.*|.*糖德仕.*|.*来得时.*|.*优(思|乐)灵.*|.*(速|长)秀霖.*|.*诺和(锐|平|佳|达|灵).*' and rx_desc ~ '(?i).*短效.*|.*生物合成人.*|.*精蛋白锌重组人.*|.*重组人.*|.*谷赖.*|.*赖脯.*|.*门冬.*') and rx_desc !~ '(?i).*注射器针头.*|.*注射针头.*|.*射器.*|.*针头.*|.*泵耗材.*|.*混(和|合).*|.*30R.*|.*50R.*|.*70R.*|.*25.*|.*预混.*|.*70\\/30.*|.*(胰岛素|优泌乐)(30|50).*|.*M30.*|.*诺和灵N.*|.*中效.*|.*甘精胰岛素.*|^(德|徳)谷胰岛素.*' then '餐时胰岛素' when rx_desc ~ '(?i).*(艾塞那|利司那|贝那鲁|利拉鲁|聚乙二醇洛塞那|司美格鲁|度拉糖)肽.*|.*百泌达.*|.*百达扬.*|.*利时敏.*|.*谊生泰.*|.*诺和力.*|.*弗来美.*|.*诺和泰.*|.*度易达.*' then 'GLP-1' when rx_desc ~ '(?i).*德谷门冬双胰岛素.*|.*诺和佳.*' then '双联(Dual)' when rx_desc ~ '(?i).*德谷胰岛素利拉鲁肽.*|.*甘精胰岛素利司那肽.*|.*诺合益.*|.*赛益宁.*' then 'Basal+GLP-1' else null end; UPDATE frequency_dict SET std_frequency = CASE WHEN frequency ~ '(?i).*qd(17|7|11|22|10|2)?.*|.*qn.*' THEN 'qd' WHEN frequency ~ '(?i).*bid.*|.*q12h.*' THEN 'bid' WHEN frequency ~ '(?i).*tid.*|.*q8h.*' THEN 'tid' WHEN frequency ~ '(?i).*qid.*|.*q6h.*' THEN 'qid' WHEN frequency ~ 
'(?i).*qod.*' THEN 'qod' WHEN frequency ~ '(?i).*qw.*' THEN 'qw' WHEN frequency ~ '(?i).*prn.*' THEN 'prn' WHEN frequency ~ '(?i).*st.*' THEN 'st' WHEN frequency ~ '(?i).*q4h.*' THEN 'q4h' WHEN frequency ~ '(?i).*q3h.*' THEN 'q3h' WHEN frequency ~ '(?i).*q2h.*' THEN 'q2h' WHEN frequency ~ '(?i).*biw.*' THEN 'biw' WHEN frequency ~ '(?i).*q1h.*' THEN 'q1h' WHEN frequency ~ '(?i).*q72h.*' THEN 'q72h' WHEN frequency ~ '(?i).*q7h.*' THEN 'q7h' ELSE NULL END;'''
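The frequency-standardization CASE above is an ordered, first-match-wins rule list. A minimal sketch of the same logic in plain Python (rule order copied from the SQL; unmatched values map to `None`, i.e. they stay unstandardized):

```python
import re

# Ordered (pattern, label) rules mirroring the frequency_dict CASE expression;
# the first pattern that matches wins, everything else falls through to None.
FREQUENCY_RULES = [
    (r"qd(17|7|11|22|10|2)?|qn", "qd"),
    (r"bid|q12h", "bid"),
    (r"tid|q8h", "tid"),
    (r"qid|q6h", "qid"),
    (r"qod", "qod"),
    (r"qw", "qw"),
    (r"prn", "prn"),
    (r"st", "st"),
    (r"q4h", "q4h"),
    (r"q3h", "q3h"),
    (r"q2h", "q2h"),
    (r"biw", "biw"),
    (r"q1h", "q1h"),
    (r"q72h", "q72h"),
    (r"q7h", "q7h"),
]

def std_frequency(raw: str):
    # Case-insensitive substring match, like the SQL '(?i).*pat.*' patterns
    for pattern, label in FREQUENCY_RULES:
        if re.search(pattern, raw, re.IGNORECASE):
            return label
    return None
```

Because the rules are checked in order, `q12h` normalizes to `bid` even though later rules also mention `q`-prefixed intervals.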
[diagnosis]
raw_col_name = ['dx_desc']
join_table_name_lst = ['diagnosis']
dict_name_lst = ['dx_desc_dict']
# Create the base table directly; add the standardized column.
create_table_sql = 'CREATE TABLE diagnosis as SELECT * FROM diagnosis;ALTER TABLE diagnosis ADD COLUMN std_dx_desc VARCHAR;'
create_dict_table_sql = 'CREATE TABLE dx_desc_dict as SELECT dx_desc,count(1) frequency FROM diagnosis group by dx_desc order by frequency desc;ALTER TABLE dx_desc_dict ADD COLUMN std_dx_desc VARCHAR;'
update_sql = '''UPDATE diagnosis SET std_dx_desc = dx_desc_dict.std_dx_desc FROM dx_desc_dict WHERE diagnosis.dx_desc = dx_desc_dict.dx_desc AND dx_desc_dict.std_dx_desc IS NOT NULL;'''
stand_sql = '''UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '高血压,') WHERE dx_desc ~ '(?i).*高血压.*|.*HBP.*' AND dx_desc !~ '(?i).*妊娠.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '血脂异常,') WHERE dx_desc ~ '(?i).*血脂异常.*|.*(胆固醇|高脂|甘油三?(脂|酯))血症.*|.*高血脂.*|.*高粘血症.*|.*高甘油三酸脂血症.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '背景性视网膜病变,') WHERE dx_desc ~ '(?i).*白内障.*|.*视网膜.*|.*眼.*|.*黄斑.*|.*玻璃体.*|.*玻血.*|.*网脱.*|.*失明.*|.*弱视.*|.*视力.*' AND dx_desc ~ '(?i).*背景性.*|.*非增殖.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '增殖性视网膜病变,') WHERE dx_desc ~ '(?i).*白内障.*|.*视网膜.*|.*眼.*|.*黄斑.*|.*玻璃体.*|.*玻血.*|.*网脱.*|.*失明.*|.*弱视.*|.*视力.*' AND dx_desc ~ '(?i).*增殖性.*' AND dx_desc !~ '(?i).*背景性.*|.*非增殖.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '黄斑水肿,') WHERE dx_desc ~ '(?i).*黄斑水肿.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '重度视觉丧失,') WHERE dx_desc ~ '(?i).*失明.*|.*眼球?萎缩.*|.*眼球?缺失.*|.*盲目(3|三).*|.*(视力|视觉)重度.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '症状性神经病变,') WHERE dx_desc ~ '(?i).*神经.*' AND dx_desc ~ '(?i).*神经炎.*|.*神经痛.*|.*晕.*|.*麻.*|.*乏.*|.*幻.*|.*蚁.*|.*虫.*|.*触电.*|.*肌.*|.*腕管.*|.*植物神经.*|.*神经血管.*|.*直立性低血压.*|.*功能性腹泻.*|.*夏科.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '外周血管病,') WHERE dx_desc ~ '(?i).*(颈|髂总|髂内|肾脏|肢端|腹主|肢|肾小)主?动脉(粥样硬|痉挛|坏疽|硬|瘤|炎|栓塞|血栓)化?.*|.*间歇性?跛行.*|.*红斑性肢痛.*|.*(伯|柏)格.*|.*雷诺氏.*|.*周围血管疾?病.*|.*动脉(肌纤维发育异|坏疽|痉挛).*|.*主动脉(瘤|炎).*|.*主动脉(粥样)?硬化.*|.*(静脉)?曲张.*|.*血栓性静脉.*|.*下肢(深静脉血栓|静脉曲张|动脉闭塞|血栓性静脉炎|静脉功能不全|静脉炎|(血管|动脉)闭塞症|静脉肌间血栓形成).*|.*周围循环.*' AND dx_desc !~ '(?i).*精索静脉曲张.*|.*(颈|主)动脉硬化.*|^静脉曲张(术后|合并溃疡|性皮炎)?$|^糖尿病,肾病综合征$|^(静脉炎和)?血栓性静脉炎$|^冠状动脉痉挛$|^主动脉粥样硬化$|^2型糖尿病,动脉硬化性心脏病,慢性肾病$|^糖尿病,冠状动脉粥样硬化性心脏病,2型糖尿病性肾病,2型糖尿病性周围神经病,$';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '下肢截肢,') WHERE dx_desc ~ '(?i).*截肢.*|.*切断.*|.*截断.*|.*截.*' AND dx_desc ~ '(?i).*腿.*|.*下肢.*|.*足.*' AND dx_desc !~ '(?i).*截瘫.*|.*创伤.*|.*动脉硬化.*|.*骨肿瘤.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, 
'微量白蛋白尿,') WHERE dx_desc ~ '(?i).*微.{0,3}蛋白尿.*|.*蛋白尿.{0,3}微.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '大量白蛋白尿,') WHERE dx_desc ~ '(?i).*大.{0,3}蛋白尿.*|.*蛋白尿.{0,3}大.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '终末期肾病,') WHERE dx_desc ~ '(?i).*肾?移植.*|.*透析?.*|.*尿毒症.*|.*CKD(5|Ⅴ|五).*|.*肾.{0,4}终末.*|.*终末.{0,4}肾.*' AND dx_desc !~ '(?i).*(旁路|心脏|肝脏)移植.*|.*移植物.*|.*胚胎移植.*|.*透(壁|明).*|.*癌.*|.*支架植入.*|.*非透析.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '缺血性心脏病,') WHERE dx_desc ~ '(?i).*冠(心病|状|脉)(动|静|动静)?脉?(瘘|瘤|衰竭|功能不全)?.*|.*动脉(粥样|硬化).{0,3}(心脏病)?.*|.*心?绞痛.*|.*心(肌|脏)(缺|供)血.*|.*缺血性心?(脏|肌|胸痛)(病|症)?.*|.*旁路移植.*|.*搭桥.*|.*多支.*|.*PCI.*|.*(不稳定|渐强|血管痉挛|普林兹迈托氏|变异|痉挛导致|其它形式|劳力|未明确|NOS|心脏)(性|型|的)?心?绞痛.*|.*(中间型冠状动脉综合|狭心|梗塞前综合)症.*|.*Prinzmetal.*|.*心绞痛(综合症|伴有证实痉挛的).*|.*(急|慢)性缺血性心脏病.*|.*缺血性心脏病(急|慢)性.*|.*动脉粥样硬化性(心血管疾|心脏)病.*|.*无痛性心肌缺血.*|.*(心脏|心壁|心室|冠状).{0,1}?动脉瘤.*|.*冠状的?(动|静)脉(栓|闭|血栓栓?)塞?非心肌?梗(塞|死)?导致.*|.*非心肌?梗(塞|死)?导致的冠状的?(动|静)脉(栓|闭|血栓栓?).*' AND dx_desc !~ '(?i).*心肌?梗(死|塞)?.*|.*心痛.*|.*陈旧(性|型)?(心|ST|Q|前|侧|下|高|间|广泛|(左|右)心室).*|.*胸痹.*|.*新生儿短暂性?心肌缺血.*|.*先天性?冠状动脉瘤.*|.*(视网膜|下肢|颈|肾|主|眼底|股|肢体|髂|闭塞性|脑|颈内|锁骨下|椎)动脉.*|^动脉(粥样)?硬化$|.*动脉硬化性脑病.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '心肌梗死,') WHERE dx_desc ~ '(?i).*心梗.*|.*心肌梗死.*|.*心痛.*|.*陈旧(性|型)?(心|ST|非ST|Q|前|侧|下|高|间|广泛|(左|右)心室).*|.*心肌梗塞.*|.*胸痹.*' AND dx_desc !~ '(?i).*陈旧.{0,5}心肌梗死.*|.*心肌梗死恢复期.*|.*亚急性心肌梗死.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '卒中,') WHERE dx_desc ~ '(?i).*(脑.{0,3}(梗|塞|死)|卒中|中风).*' AND dx_desc !~ '(?i).*(脑梗(死|塞|赛)?|中风|卒中|脑栓塞)后遗症.*|.*陈旧.{0,1}(脑(梗|干)(死|塞)?|心肌梗(死|塞)).*|.*脑梗(死|塞).{0,2}陈旧性?.*|.*(脑梗(死|塞)|中风|脑卒中).{0,2}史.*|.*(脑梗死.{0,1}|中风)恢复期.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '心力衰竭,') WHERE dx_desc ~ '(?i).*心.{0,3}衰竭?.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '妊娠糖尿病,') WHERE dx_desc ~ '.*妊娠期.{0,3}糖尿病.*';UPDATE dx_desc_dict SET std_dx_desc = CONCAT(std_dx_desc, '1型糖尿病,') WHERE dx_desc ~ 
'.*(Ⅰ|I|1|一|胰岛素依赖).?糖尿病.*|.*糖尿病.?(Ⅰ|I|1|一|胰岛素依赖)型.*';UPDATE dx_desc_dict SET std_dx_desc = substr(std_dx_desc, 1, length(std_dx_desc)-1) where std_dx_desc is not null and std_dx_desc != '';'''
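Unlike the single-label CASE used for prescriptions, the diagnosis rules are cumulative: every rule that fires appends its label plus a comma, and the final UPDATE strips the trailing comma. A minimal Python sketch of that accumulate-then-trim pattern (rule list abbreviated to three of the rules above):

```python
import re

# Each rule is (predicate, label); every rule that fires contributes a label.
# Abbreviated to three rules from the SQL above for illustration.
RULES = [
    (lambda d: re.search(r"高血压|HBP", d, re.I) and not re.search(r"妊娠", d), "高血压"),
    (lambda d: re.search(r"黄斑水肿", d), "黄斑水肿"),
    (lambda d: re.search(r"妊娠期.{0,3}糖尿病", d), "妊娠糖尿病"),
]

def std_dx_desc(dx_desc: str):
    out = ""
    for pred, label in RULES:
        if pred(dx_desc):
            out += label + ","          # CONCAT(std_dx_desc, '<label>,')
    return out[:-1] if out else None    # the final substr/length trim step
```

This is why the SQL ends with the `substr(std_dx_desc, 1, length(std_dx_desc)-1)` cleanup: a row matching several rules carries a comma-separated label list.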
[lab_result_cm]
raw_col_name = ['lab_item_name']
join_table_name_lst = ['lab_result_cm']
dict_name_lst = ['lab_item_name_dict']
# Create the base table directly; add the standardized column.
create_table_sql = 'CREATE TABLE lab_result_cm as SELECT * FROM lab_result_cm;ALTER TABLE lab_result_cm ADD COLUMN std_lab_item_name VARCHAR;'
create_dict_table_sql = 'CREATE TABLE lab_item_name_dict as SELECT lab_item_name,count(1) frequency FROM lab_result_cm group by lab_item_name order by frequency desc;ALTER TABLE lab_item_name_dict ADD COLUMN std_lab_item_name VARCHAR;'
update_sql = '''UPDATE lab_result_cm SET std_lab_item_name = lab_item_name_dict.std_lab_item_name FROM lab_item_name_dict WHERE lab_result_cm.lab_item_name = lab_item_name_dict.lab_item_name AND lab_item_name_dict.std_lab_item_name IS NOT NULL;'''
stand_sql = '''UPDATE lab_item_name_dict SET std_lab_item_name = CASE WHEN lab_item_name ~ '(?i).*空腹.*|.*FPG.*|.*空腹血糖.*' AND lab_item_name ~ '(?i).*血.*' THEN 'FPG' WHEN lab_item_name ~ '(?i).*HbA1c.*|.*糖化血红蛋白.*' THEN 'HbA1c' WHEN lab_item_name ~ '(?i).*OGTT.*|.*耐糖.*|.*负荷.*' AND lab_item_name ~ '(?i).*2.{0,3}小时.*|.*120{0,3}分钟.*' THEN '葡萄糖负荷2小时血糖' WHEN lab_item_name ~ '(?i).*C肽.*|.*C-PR.*' AND lab_item_name ~ '(?i).*空腹.*' THEN '空腹C肽' WHEN lab_item_name ~ '(?i).*C肽.*|.*C-PR.*' AND lab_item_name ~ '(?i).*1.{0,3}小时.*|.*60{0,3}分钟.*' THEN '餐后1小时C肽' WHEN lab_item_name ~ '(?i).*C肽.*|.*C-PR.*' AND lab_item_name ~ '(?i).*2.{0,3}小时.*|.*120{0,3}分钟.*' THEN '餐后2小时C肽' WHEN lab_item_name ~ '(?i).*C肽.*|.*C-PR.*' AND lab_item_name ~ '(?i).*3.{0,3}小时.*|.*180{0,3}分钟.*' THEN '餐后3小时C肽' ELSE NULL END;'''
version: '3.8'

x-common-settings: &common-settings
  image: palan-app:1.0.0
  volumes:
    - "./:/opt/in2-t2dm"

services:
  etl:
    <<: *common-settings
    command: /bin/bash -c "cd /opt/in2-t2dm/codes/bash/ && sh etl.sh"
  transform:
    <<: *common-settings
    command: /bin/bash -c "cd /opt/in2-t2dm/codes/bash/ && sh transform.sh"
  preprocess:
    <<: *common-settings
    command: /bin/bash -c "cd /opt/in2-t2dm/codes/bash/ && sh preprocess.sh"
  shiny:
    <<: *common-settings
    container_name: in2-shinyserver
    restart: always
    ports:
      - "3837:3838"
    command: /bin/bash -c "cd /opt/in2-t2dm/codes/bash/ && sh shiny.sh"
import pandas as pd
import os

def deduplicate_csv_files():
    # Collect all .csv files in the current directory
    csv_files = [f for f in os.listdir('.') if f.endswith('.csv')]
    for file in csv_files:
        # Read the CSV file
        df = pd.read_csv(file, low_memory=False)
        # Drop duplicate rows
        df_deduplicated = df.drop_duplicates()
        print(f"Rows before dedup ({file}): {len(df)}; rows after dedup ({file}): {len(df_deduplicated)}")
        # Overwrite the original file
        df_deduplicated.to_csv(file, encoding='utf-8-sig', index=False)
        print(f"Deduplicated data written back to: {file}")

deduplicate_csv_files()
import duckdb
import pyarrow as pa
import pyarrow.flight as fl
import logging
import os

# Configure log format and level for server diagnostics
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

class DuckDBFlightServer(fl.FlightServerBase):
    def __init__(self, db_path):
        # "grpc://0.0.0.0:8815" listens on port 8815 on all network interfaces
        super().__init__(location="grpc://0.0.0.0:8815")
        try:
            self.connection = duckdb.connect(db_path)
            logging.info(f"Successfully connected to DuckDB database at {db_path}")
        except duckdb.Error as e:
            logging.error(f"Failed to connect to DuckDB database: {e}")
            raise  # re-raise so the server does not start with a broken connection

    def do_get(self, context, ticket):
        try:
            query = ticket.ticket.decode('utf-8')
            df = self.connection.execute(query).fetchdf()
            # Convert the DataFrame to an Arrow table and stream it back
            arrow_table = pa.Table.from_pandas(df)
            return fl.RecordBatchStream(arrow_table)
        except duckdb.Error as e:
            logging.error(f"Error executing query in do_get: {e}")
            # pyarrow.flight reports errors by raising Flight exceptions;
            # there is no context.set_error() in the server call context
            raise fl.FlightServerError("Database query error")
        except UnicodeDecodeError as e:
            logging.error(f"Error decoding ticket in do_get: {e}")
            raise fl.FlightServerError("Invalid ticket encoding")

    def do_put(self, context, descriptor, reader):
        try:
            # Minimal example: insert the incoming batches into the table named
            # in descriptor.path; adapt to the real schema and business logic
            table_name = descriptor.path[0].decode('utf-8')
            arrow_table = reader.read_all()  # read all incoming record batches
            df = arrow_table.to_pandas()
            # DuckDB's replacement scan resolves the local variable `df` by name
            self.connection.execute(f"INSERT INTO {table_name} SELECT * FROM df")
            logging.info(f"Successfully inserted data into table {table_name}")
        except duckdb.Error as e:
            logging.error(f"Error executing put operation: {e}")
            raise fl.FlightServerError("Error during data insertion")
        except UnicodeDecodeError as e:
            logging.error(f"Error decoding table name: {e}")
            raise fl.FlightServerError("Invalid table name encoding")

def main():
    db_path = os.path.join(os.path.dirname(__file__), 'exported.duckdb')
    server = DuckDBFlightServer(db_path)
    print("Starting Flight Server on port 8815...")
    try:
        server.serve()
    except Exception as e:
        logging.error(f"Error starting the Flight Server: {e}")

if __name__ == "__main__":
    main()
from flightsql import FlightSQLClient
import socket
import pandas as pd

from flightsql import __version__ as flightsql_version
print(f"flightsql version: {flightsql_version}")

def test_connection(host, port):
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"Connected to {host}:{port}")
    except OSError as e:
        print(f"Could not connect to {host}:{port}, error: {e}")

# Create the FlightSQLClient instance
# (note: token=True is a placeholder; the client normally expects a token string)
client = FlightSQLClient(host='192.168.101.45', port=50802, insecure=True,
                         disable_server_verification=True, token=True)

# Check basic TCP reachability first
test_connection('192.168.101.45', 50802)

# Execute the SQL query and get the FlightInfo describing the result
info = client.execute("SELECT * FROM iceberg.cdm.outpatient_record LIMIT 10000000")

# Fetch every endpoint and convert each stream to a DataFrame
data_frames = []
for endpoint in info.endpoints:
    reader = client.do_get(endpoint.ticket)
    data_frames.append(reader.read_all().to_pandas())

# Concatenate all chunks into one DataFrame
final_data_frame = pd.concat(data_frames, ignore_index=True)

# Print the final result
print(final_data_frame)
import pandas as pd
import os

def deduplicate_csv_files():
    # Collect all .csv files in the current directory
    csv_files = [f for f in os.listdir('.') if f.endswith('.csv')]
    for file in csv_files:
        # Read the CSV file
        df = pd.read_csv(file)
        # Drop duplicate rows
        df_deduplicated = df.drop_duplicates()
        print(f"Rows before dedup ({file}): {len(df)}; rows after dedup ({file}): {len(df_deduplicated)}")
        # Write to a new file instead of overwriting the original
        new_filename = f"{os.path.splitext(file)[0]}_unique.csv"
        df_deduplicated.to_csv(new_filename, index=False)
        print(f"Deduplicated data saved to: {new_filename}")

deduplicate_csv_files()
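Both dedup scripts rely on `DataFrame.drop_duplicates()`, which by default compares all columns and keeps the first occurrence of each fully duplicated row; rows that differ in any column are retained. A toy illustration:

```python
import pandas as pd

# One fully duplicated row (1, "a") and one row that shares only the id
df = pd.DataFrame({"id": [1, 1, 1, 2], "value": ["a", "a", "b", "c"]})
dedup = df.drop_duplicates()

# Only the exact duplicate is dropped; (1, "b") survives because "value" differs
print(len(df), len(dedup))
```

If the intent were to deduplicate on a key column only, `drop_duplicates(subset=["id"])` would be needed instead.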
from data_query import *

queries = """
select DISTINCT numerical_value, normal_low, normal_high, count(*) from iceberg.cdm.lab_report_result
where test_item_name = 'BP' group by test_item_name, numerical_value, normal_low, normal_high
"""

# Alternative query, kept for reference:
# queries = """
# select DISTINCT test_item_name, count(*) from iceberg.cdm.lab_report_result
# where test_item_name ~* '血压|舒张压|收缩压|BP|SBP|DBP' group by test_item_name
# """

# Run the query and print the result
a = execute_query(queries)
print(a)