Changing the Hexo Permalink Format with Backward Compatibility
I've been wanting to simplify each post's permalink for a while. Right now a permalink looks like this:
/2021/03/21/migrateopsmanager.en/
I would like to drop the date in the middle, so it becomes:
/migrateopsmanager.en/
The reason is that a short link makes a more sensible identifier, and the date string in the URL serves no real purpose unless many posts share the same name. But that cannot happen, because all posts live in the source/_posts folder, and duplicate names would cause filename conflicts.
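As a rough sketch of the change (assuming the stock Hexo permalink setting, and assuming the hexo-generator-alias plugin is an acceptable way to keep the old dated URLs working; the alias value below just reuses the example URL above):

```yaml
# _config.yml -- the Hexo default is permalink: :year/:month/:day/:title/
permalink: :title/

# Front matter of an existing post, if the hexo-generator-alias plugin is
# installed, so the old dated URL keeps redirecting (illustrative value):
# alias: /2021/03/21/migrateopsmanager.en/
```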
We have a SQL Server cloud database in production that costs a lot but performs poorly. Recently we noticed that some fairly simple stored procedures took several minutes to run. One frequently used stored procedure joins a few tables, each with tens of millions of rows, on a key and returns the result. Analyzing the execution plan showed that index seeks took more than 90% of the time, and it turned out that the indexes on these tables were severely fragmented. Since this old DB had changed hands several times, nobody knew what the tables and stored procedures were for, so we had to analyze the table schemas and the tables each stored procedure depends on by hand, and then rebuild the indexes based on the results to improve performance. Below are the key queries used to analyze the DB.
We have a costly SQL Server cloud database with poor performance. Some stored procedures that join several large tables (each with roughly 10M rows) on a key took several minutes to run. The execution plan showed that index seeks accounted for over 90% of the total time, and the root cause turned out to be heavily fragmented indexes. Since the database had changed hands many times, we first had to analyze the schemas, table disk usage, and the tables each stored procedure depends on. Based on these results, we cleaned up unused tables and stored procedures and rebuilt the indexes to improve the DB performance. Here are the queries used to accomplish these tasks.
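To give a flavor of the kind of query involved (a minimal sketch; the table and index names are placeholders, not the actual schema), fragmentation can be inspected and a bad index rebuilt like this:

```sql
-- Find heavily fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild one of them (placeholder names).
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
```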
Recently we want to deploy MongoDB Ops Manager and MongoDB deployments in different data centers to improve disaster recovery. If they are deploymented in the same data center and unfortunately it fails, you cannot restore the backup data to a new cluster as both Ops Manager and deployments are unavailable.
Of course, we don't want to re-deploy the existing MongoDB deployments in Kubernetes. But how to make the deployments sending data to the new ops manager URL?
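One possible direction, sketched here under the assumption that the MongoDB Enterprise Kubernetes Operator and its project ConfigMap are in use (the names and URL below are placeholders, not the post's actual setup), is to point the project ConfigMap at the new Ops Manager base URL and let the operator reconfigure the agents:

```yaml
# Project ConfigMap referenced by the MongoDB custom resource
# (e.g. via spec.opsManager.configMapRef.name in recent operator versions).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
  namespace: mongodb
data:
  baseUrl: https://ops-manager.dc2.example.com:8443   # new, externally reachable Ops Manager
  projectName: my-project
  orgId: ""                                           # optional
```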
Using MongoDB in .NET is very simple. In general you can work with BsonDocument directly, or define your own data types and use them for CRUD operations on documents. This post compares the pros and cons of the two approaches with examples; usually, operating on documents through a strongly-typed collection is more convenient.
Using MongoDB in .NET is easy. However, there are two ways to manipulate documents in C# code: raw BsonDocument or strongly-typed documents. In this article, we compare the two by example. In general, a strongly-typed collection is preferred unless you have a strong reason to use weakly-typed documents (for example, different types stored in the same collection).
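To give a flavor of the difference (a minimal sketch; the Person class, database, and collection names are made up for illustration):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public class Person
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Run()
    {
        var db = new MongoClient("mongodb://localhost:27017").GetDatabase("test");

        // Weakly-typed: every field access goes through string keys and BsonValue.
        var raw = db.GetCollection<BsonDocument>("people");
        raw.InsertOne(new BsonDocument { { "Name", "Alice" }, { "Age", 30 } });
        var doc = raw.Find(Builders<BsonDocument>.Filter.Eq("Name", "Alice")).FirstOrDefault();
        int age = doc["Age"].AsInt32;

        // Strongly-typed: LINQ-style filters and compile-time checking.
        var people = db.GetCollection<Person>("people");
        people.InsertOne(new Person { Name = "Bob", Age = 25 });
        var bob = people.Find(p => p.Name == "Bob").FirstOrDefault();
    }
}
```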
MongoDB transactions are a nice feature, but for multi-document transactions under high concurrency, write conflicts are hard to avoid. Here is an example of a write conflict:
Exception: Command update failed: Encountered error from mongodb.svc.cluster.local:27017 during a transaction :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction..
So how do we implement transaction retries correctly?
MongoDB transactions are a nice feature. Although MongoDB uses optimistic concurrency control, write conflicts are unavoidable. The situation gets worse with multi-document transactions, which modify many documents in a single transaction. When a write conflict happens, a MongoCommandException is thrown:
Exception: Command update failed: Encountered error from mongodb.svc.cluster.local:27017 during a transaction :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction..
How do we handle the WriteConflict error in MongoDB?
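As a sketch of one common retry pattern with the .NET driver (the database, collection, and the update itself are placeholders), you can either let the callback API retry transient errors for you, or loop manually on the TransientTransactionError label:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public static class TransactionRetryDemo
{
    public static void Run(IMongoClient client)
    {
        var orders = client.GetDatabase("shop").GetCollection<BsonDocument>("orders");

        // Option 1: the driver retries the whole callback on transient errors
        // (including WriteConflict inside a transaction).
        using (var session = client.StartSession())
        {
            session.WithTransaction((s, ct) =>
            {
                orders.UpdateOne(s,
                    Builders<BsonDocument>.Filter.Eq("_id", 1),
                    Builders<BsonDocument>.Update.Inc("qty", -1));
                return true; // result of the transaction body
            });
        }

        // Option 2: manual retry loop keyed on the TransientTransactionError label.
        for (var attempt = 0; ; attempt++)
        {
            using var session = client.StartSession();
            session.StartTransaction();
            try
            {
                orders.UpdateOne(session,
                    Builders<BsonDocument>.Filter.Eq("_id", 1),
                    Builders<BsonDocument>.Update.Inc("qty", -1));
                session.CommitTransaction();
                break;
            }
            catch (MongoException ex) when (
                ex.HasErrorLabel("TransientTransactionError") && attempt < 5)
            {
                session.AbortTransaction();
                // Back off briefly, then retry the whole transaction.
                System.Threading.Thread.Sleep(50 * (attempt + 1));
            }
        }
    }
}
```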
Today I tried to import an existing MongoDB deployment (outside the Kubernetes cluster) into a MongoDB Ops Manager running in Kubernetes. After installing the MongoDB Agent on the deployment, only the automation functionality worked, while monitoring and backup did not. The root cause is that the agent still tries to post data to Ops Manager's internal endpoint.
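For illustration of where that endpoint comes from (the path, keys, and URL below are the usual automation agent defaults, stated here as assumptions rather than the post's actual fix), the agent's base URL has to point at an address reachable from outside the cluster:

```properties
# /etc/mongodb-mms/automation-agent.config (illustrative values)
# If mmsBaseUrl points at the operator's internal Service, agents outside
# the cluster cannot ship monitoring/backup data; use an external URL instead.
mmsBaseUrl=https://ops-manager.example.com:8443
mmsGroupId=<project id>
mmsApiKey=<agent api key>
```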
I have a sharded cluster (2 shards with 3 mongods each, 3 config servers, and 2 mongoses) deployed by MongoDB Ops Manager.
Last week, the status of one shard host was shown as a grey diamond (hover text: "Last Ping: Never"). Besides, on the Ops Manager servers page, one server had two processes (e.g. sharddb-0 and sharddb-config). However, the cluster still worked well, and the host sharddb-0-0 (shard 0, replica 0) was still listed by sh.status() and rs.status() in the mongo shell.
What's wrong with the cluster?
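For reference, this is roughly how the hosts were being checked in the mongo shell (a generic sketch; the host names are taken from the symptom description above):

```js
// On a mongos: list the shards and their members.
sh.status()

// On the shard's replica set (e.g. sharddb-0): check each member's health;
// sharddb-0-0 should show up with health 1 and a recent heartbeat.
rs.status().members.forEach(m => print(m.name, m.stateStr, m.health))
```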