As a leading global online brokerage, Tiger Brokers has always regarded technology R&D and innovation as its core competitiveness and continues to empower global financial services with technology. As its business rapidly expands into international markets, diverse compliance requirements place ever higher demands on R&D velocity. To meet them, Tiger Brokers built DataStone, a self-developed engineering platform that improves R&D efficiency and reduces the O&M burden by integrating technical tools, consolidating cloud computing resources, and unifying usage standards. As adoption of the platform grew rapidly, how DataStone manages its data infrastructure became a new challenge, mainly in the following aspects:
Difficulty staying agile under global compliance: The business spans multiple markets and must dynamically adapt to differentiated financial policies and compliance standards. The traditional way of building and managing data engines (databases and middleware) makes it hard to rapidly build, tear down, and rebuild cross-regional business systems, and the complex, ever-changing development process constrains how quickly products can expand overseas.
Fragmented control caused by a dispersed technology stack: As the business moved wholesale to containers, diverse data engines were deployed in a scattered fashion without unified management capabilities, and this siloed model does not fit the flexible, agile, and standardized architecture of a platform engineering system. Inconsistent O&M standards, a fragmented monitoring system, and slow incident response further limit collaboration across platform-level projects, so unified, standardized data engine management is urgently needed to improve the engineering platform's O&M collaboration and process standardization in this area.
Responsibility gaps in the cloud-native transformation: The spread of container technology has blurred the boundaries of data engine management. Traditional O&M teams lack containerized management capabilities, while development teams struggle to balance application iteration with stateful service governance. Overlapping responsibilities create O&M blind spots, and a new management model of collaboration and autonomy, built on the platform engineering system, is urgently needed.
Against this backdrop, Tiger Brokers' platform engineering team took on the challenge and launched a data infrastructure transformation, aiming to further improve DataStone's service capabilities and better support the company's globalization strategy.
With the DataStone engineering platform as the core carrier and KubeBlocks Enterprise as the foundation of its new generation of data infrastructure, Tiger Brokers automates the integration between the application development process and full lifecycle management of data engines, improves end-to-end efficiency across business system development, testing, delivery, and data preparation, and provides platform-level assurance for meeting the regulatory requirements of complex global financial markets.
Developers commit YAML files containing declarative configurations for databases and middleware (such as resource specifications and regional compliance settings) to the code repository, which triggers the CI/CD pipeline.
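As an illustration, such a manifest might look like the sketch below, modeled on the public KubeBlocks Cluster CRD. The cluster name, namespace, and resource values are placeholders rather than Tiger Brokers' actual configuration, and field names vary between KubeBlocks versions.

```yaml
# Hedged sketch of a declarative database definition kept in the repo.
# Modeled on the public KubeBlocks Cluster CRD; names and sizes are placeholders.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: order-service-mysql      # placeholder cluster name
  namespace: staging-sg          # placeholder per-region environment namespace
spec:
  clusterDefinitionRef: apecloud-mysql
  clusterVersionRef: ac-mysql-8.0.30
  terminationPolicy: Delete      # lets the pipeline tear the cluster down later
  componentSpecs:
    - name: mysql
      componentDefRef: mysql
      replicas: 3                # primary plus replicas for high availability
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "2"
          memory: 4Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 50Gi
```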
The DataStone management module parses the database or middleware configuration from the code repository, makes standard API calls to KubeBlocks, and carries out resource creation (automatically selecting resource configurations and network isolation schemes according to policy), configuration injection, and dependency initialization (account permissions and data schema deployment).
While the business application's container images are being built, the database cluster is brought up in parallel and network connectivity is established.
When the pipeline issues the environment teardown command, the platform first initiates a backup task, generates a snapshot and persists it to OBS, and then reclaims the computing resources.
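A minimal sketch of how that pre-teardown backup could be requested through KubeBlocks' data protection API is shown below. The policy and method names are placeholders, field names differ across KubeBlocks versions, and the OBS target itself would be configured in the backup repository/policy, which is not shown here.

```yaml
# Hedged sketch of the backup requested before environment teardown.
# Policy/method names are placeholders; field names vary by KubeBlocks version.
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: Backup
metadata:
  name: order-service-mysql-final-backup
  namespace: staging-sg
spec:
  backupPolicyName: order-service-mysql-backup-policy   # placeholder policy
  backupMethod: xtrabackup                               # engine-specific method (assumption)
```

Once the backup reports completion, the pipeline deletes the Cluster object, and the cluster's terminationPolicy determines how its resources are reclaimed.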
Through this "DevOps + Database as Code" collaboration model, lifecycle management of databases and middleware is folded into the CI/CD pipeline: self-service application, release, and destruction of database and middleware resources are automated in one place, closing the gap between database changes and business releases that is common in the financial industry and offering a more agile way to meet the financial regulatory requirements of different countries.
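To make the flow concrete, here is a minimal pipeline sketch. It assumes a GitLab-CI-style pipeline; the source does not name Tiger Brokers' CI system, so the stage names, paths, and commands are purely illustrative.

```yaml
# Illustrative pipeline only (GitLab-CI-style syntax assumed); job names, paths,
# and image tags are placeholders, not Tiger Brokers' actual pipeline.
stages: [provision-data, build, teardown]

provision-database:
  stage: provision-data
  script:
    # Apply the declarative database/middleware manifests from the repo;
    # KubeBlocks reconciles them into running clusters.
    - kubectl apply -n staging-sg -f infra/databases/

build-app:
  stage: build
  script:
    - docker build -t registry.example.com/order-service:$CI_COMMIT_SHA .

teardown-environment:
  stage: teardown
  when: manual
  script:
    # Back up first (see the Backup sketch above), then delete the clusters.
    - kubectl apply -n staging-sg -f infra/backups/final-backup.yaml
    - kubectl wait -n staging-sg backup/order-service-mysql-final-backup --for=jsonpath='{.status.phase}'=Completed --timeout=30m
    - kubectl delete -n staging-sg -f infra/databases/
```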
Through KubeBlocks Enterprise's standardized Operator APIs, the engineering platform manages different data engines in a consistent, standardized way in cloud-native scenarios, abandoning the siloed model of one management tool per data engine. A centralized O&M hub provides an integrated control plane covering operations management, monitoring and alerting, and log management, eliminating the tool fragmentation and O&M complexity of traditional silo management and improving the engineering platform's O&M collaboration and process standardization for data engines. At the same time, resource pooling unifies management of the underlying resources, which resolves resource fragmentation and significantly improves resource utilization.
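The "one API for many engines" point can be illustrated with another hedged sketch: a different engine, say Redis, is described with the same Cluster kind, and only the engine references change (all names below are placeholders).

```yaml
# The same Cluster API shape managing a different engine; only the engine
# references change. Placeholder names; fields vary by KubeBlocks version.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: session-cache-redis
  namespace: staging-sg
spec:
  clusterDefinitionRef: redis
  clusterVersionRef: redis-7.0.6
  terminationPolicy: Delete
  componentSpecs:
    - name: redis
      componentDefRef: redis
      replicas: 2
      resources:
        limits:
          cpu: "1"
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
```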
Tiger Brokers' technical team also found that the cap on the number of cloud disks that can be attached to a server node made ultra-high-density replica deployment impossible, so the disk allocation scheme for data engines was redesigned: different components use different disk schemes, with MySQL databases on native cloud disks and the cloud disks used by other data engines managed through Ape-local, which works around the per-node attachment limit on public cloud nodes. Ape-local also supports over-allocation of storage volume capacity and throughput together with hard limits, so storage capacity can be fully utilized while data overflow and I/O contention are kept under control.
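In Kubernetes terms, the per-engine disk scheme surfaces as different storage classes referenced from the volume claim templates. The fragment below only sketches that idea; the storage class names are invented for illustration, and Ape-local's real provisioner name and over-allocation parameters are not reproduced here.

```yaml
# Fragment of a componentSpec's volumeClaimTemplates (storage class names are hypothetical).
# A MySQL component binds to a native cloud-disk class:
volumeClaimTemplates:
  - name: data
    spec:
      storageClassName: cloud-ssd        # hypothetical native cloud disk class
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
---
# Other engines bind to a class backed by Ape-local managed disks:
volumeClaimTemplates:
  - name: data
    spec:
      storageClassName: ape-local-ssd    # hypothetical Ape-local backed class
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```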
Relying on KubeBlocks Enterprise, Tiger Brokers built a three-in-one data engine management architecture for the engineering platform, combining a self-service plane, expert O&M experience, and an enterprise-grade management and control base, which resolves the ambiguity around management boundaries for containerized data engines. The platform provides a declarative self-service interface (web console/API) for development teams and supports one-click application for database instances with preset compliance policies, so developers can obtain production-ready data engine resources in minutes without containerization expertise while operating within strictly limited boundaries. KubeBlocks Enterprise codifies performance tuning parameters and security baselines into reusable production-grade configuration templates. Its enterprise-grade control base tracks the status of global resources in real time and safeguards SLAs through an automated fault handling engine (for example, automatic instance migration or active/standby switchover on node failure) and compliance workflows (such as automatic backups). The result is a closed loop of collaborative governance: developers operate autonomously within a security sandbox, KubeBlocks Enterprise focuses on high-value configuration optimization, and compliance O&M is automated.
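Routine failover is handled automatically by the controllers, but the same kind of operation can also be requested declaratively. Below is a hedged sketch of a primary/standby switchover via KubeBlocks' OpsRequest API; cluster and instance names are placeholders, and field names (for example clusterRef versus clusterName) differ between versions.

```yaml
# Hedged sketch of a declaratively requested primary/standby switchover.
# Names are placeholders; field names vary across KubeBlocks versions.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: order-service-mysql-switchover
  namespace: staging-sg
spec:
  clusterRef: order-service-mysql
  type: Switchover
  switchover:
    - componentName: mysql
      instanceName: order-service-mysql-mysql-1   # candidate to promote (placeholder)
```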
Technical capability improvement: Through the unified Operator, InstanceSet, Ape-local, and other technologies provided by KubeBlocks Enterprise, the engineering platform brings a new management experience to data engines. Production-grade high-availability architectures for the different data engines ensure runtime stability, flexible data backup and recovery ensures data reliability, and fine-grained O&M monitoring of the data engines keeps operations efficient.
Work efficiency improvement: Powered by unified management logic, a self-service model, and highly reliable data engine configurations, the engineering platform greatly relieves O&M pressure and improves both management convenience and the responsiveness of R&D services. Database and replica creation times have been cut by 90%, one-click provisioning of a test environment has dropped from several days to about an hour, and problem localization now takes minutes.
Business value improvement: End-to-end CI/CD automation has noticeably accelerated the development, testing, and release of business applications. A rich pool of data engine resources makes it easy to pick a suitable data engine quickly when exploring new business ideas, and combined with CI/CD it supports rapid validation and iteration, significantly improving business responsiveness.