MongoDB way to explore — V2EX MongoDB feed (snapshot 2025-05-28). Copyright © 2010-2018, V2EX

Experts, urgent help needed, ran into a big problem (2025-05-28, iamtuzi3333)

Any good MongoDB GUI tools? (2024-10-10, dropdatabase)

Big problem: memory is maxed out (2024-09-19, iamtuzi3333)

Why is the file mongodump produces more than twice the size of the database itself? (2024-06-21, drymonfidelia)
The database is only 100 GB, but the dumped BSON is 242 GB and no longer fits on the disk. How do I back the data up to another machine without consuming local disk space?

Is MongoDB suitable for storing massive amounts of data? (2024-05-14, iamtuzi3333)

How can MongoDB be tuned down to MySQL-level resource usage? (2024-05-06, gosky)
Mainly about reducing memory usage. The workload is write-light and read-heavy; the application layer can add its own cache, and some performance degradation is acceptable.
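For the mongodump disk-space question above, one commonly used approach (a sketch; host names and paths are placeholders) is to stream the dump over SSH instead of writing it locally, since mongodump can emit a single compressed archive on stdout:

```
# Stream a compressed archive straight to another machine; nothing is
# written to the local disk. backup-host and paths are placeholders.
mongodump --archive --gzip --uri="mongodb://localhost:27017" \
  | ssh user@backup-host 'cat > /backup/mongo-$(date +%F).archive.gz'

# Restore on the other side:
# mongorestore --archive=/backup/mongo-2024-06-21.archive.gz --gzip
```

The dump is larger than the data files because WiredTiger stores collections block-compressed on disk while BSON dumps are uncompressed; --gzip usually brings the archive back below the on-disk size.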
Newbie asking for advice (2024-04-17, SevenNight2020)

MongoDB newbie humbly asking the experts (2024-03-15, calmlyman)
Scenario: the server runs some scheduled jobs with Node.js + mongoose + node-schedule; they execute bulkWrite operations, roughly like this:

updateOne: { filter: {id: id}, update: {$set: item}, upsert: false } 

When too many jobs run at the same moment, mongod occasionally dies with the following error:

MongoBulkWriteError: connection 20 to 127.0.0.1:27017 closed 

When that happens, my only fix is to run systemctl restart mongod and it recovers. Is this because too much data is being written, exceeding some limit? What is the cause, and how should I optimize? Thanks!
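A connection that dies under a burst of bulkWrite load is often saturation rather than a hard server limit; a common mitigation (a sketch, not taken from the post) is to cap batch sizes so no single call holds the connection for too long:

```javascript
// Split a large array of bulk operations into fixed-size batches
// (size 1000 is an arbitrary example, not a MongoDB limit).
function chunkOps(ops, size = 1000) {
  const batches = [];
  for (let i = 0; i < ops.length; i += size) {
    batches.push(ops.slice(i, i + size));
  }
  return batches;
}

// Usage sketch (model name assumed):
// for (const batch of chunkOps(allOps)) {
//   await Model.bulkWrite(batch, { ordered: false });
// }
```

Spacing the scheduled jobs out, or limiting their concurrency, attacks the same problem from the scheduler side.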
A question about optimizing count in MongoDB at large data volumes (2024-03-13, Belmode)

There is a collection that receives close to 600 MB of data per month. The project has been live for about four months and now holds about 25 million documents. A single count takes 1-2 minutes, and even with all kinds of index optimizations it barely improves.

How should this be optimized, or rather, what is a more suitable way to run count queries at this data volume?

Thanks!
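For counts at this scale, two common options are collection.estimatedDocumentCount() (metadata-based and near-instant, but it takes no filter) and a pre-aggregated counter document maintained on every insert. A sketch of the latter (the counter _id scheme is made up for illustration):

```javascript
// Build the bulkWrite op that increments a per-day counter document.
// Reading the count later is a single _id lookup instead of scanning
// tens of millions of documents.
function counterUpdateFor(day, n = 1) {
  return {
    updateOne: {
      filter: { _id: `count:${day}` },
      update: { $inc: { total: n } },
      upsert: true,
    },
  };
}
```

The trade-off is an extra write per insert batch in exchange for O(1) reads.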
The findOneAndUpdate concurrency question in MongoDB (2024-03-12, Grand)

{
  "_id": ObjectId, // primary key
  "status": String
}

The collection contains one document:

{
  "_id": 1,
  "status": "waiting"
}

Now two threads A and B concurrently execute findOneAndUpdate({"_id": 1, "status": "waiting"}, {$set: {"status": "running"}}).
findOneAndUpdate is an atomic operation, but is it possible for both to find the document at the same time?
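For reference, the usual reasoning: findOneAndUpdate matches and modifies under a document-level lock, so only one of two concurrent calls can still see status "waiting"; the other matches nothing and receives null. A sketch of the claim pattern (collection handle injected; the returnDocument option name follows recent Node drivers):

```javascript
// Atomically claim a waiting job; exactly one concurrent caller wins,
// every other caller gets null back.
async function claimJob(coll, id) {
  return coll.findOneAndUpdate(
    { _id: id, status: "waiting" },
    { $set: { status: "running" } },
    { returnDocument: "after" }
  );
}
```

The key is that the status check is part of the filter, not a separate read.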
Help: MongoDB Compass behind a socks5 proxy cannot connect to a database on MongoDB Atlas (2024-02-16, hlwjia)

I can't figure out where the problem is.

The v2rayX client and the proxy are fine; I use them all the time.

The database and the connection credentials are also fine, because when I switch to my phone's Hong Kong data plan, bypassing the proxy, it connects right away.
Syncing a remote MongoDB database to a local Elasticsearch (2024-01-28, nleg)

Is there a solution that syncs the remote MongoDB database into my local ES on every boot, so that I can search locally?
The MongoDB log file is huge: how do I read it, and how do I cap its size? (2023-11-29, manasheep)

The server runs MongoDB 4.0, deployed on Windows Server. The problem: recently the MongoDB service occasionally stops for no apparent reason, so I wanted to check the log, only to find it is 55 GB and VS Code refuses to open it, claiming it is a binary file. What tool can read a log like this?
Also, how do I limit the log file's size? (Ideally written to separate files by date; is that possible?)
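On the rotation question: mongod can rotate its log on demand via db.adminCommand({ logRotate: 1 }) (or SIGUSR1 on Unix), and the config below is a sketch of the usual setup. Note it renames the current file with a timestamp suffix rather than splitting strictly by date, so an external scheduler still has to trigger the rotation daily; values here are examples, not the poster's paths.

```yaml
# mongod.conf sketch (example values)
systemLog:
  destination: file
  path: "D:/mongodb/log/mongod.log"
  logAppend: true
  logRotate: rename   # on rotation, rename the old file with a timestamp
```

A 55 GB file that an editor mislabels as binary can still be sampled with command-line tools (tail, findstr/Select-String) rather than opened whole.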
Emergency: restoring cold-backup data (2023-11-28, sadfQED2)

I have MongoDB cold-backup data from four years ago that now needs to be restored and used. What was left behind is a 19 GB tar file whose content listing looks like this:

mongodb/ mongodb/WiredTiger.lock mongodb/index-3--2162478702715552952.wt mongodb/diagnostic.data/ mongodb/diagnostic.data/metrics.2019-11-12T20-37-31Z-00000 mongodb/diagnostic.data/metrics.2019-12-03T12-08-23Z-00000 mongodb/diagnostic.data/metrics.2019-11-26T13-42-24Z-00000 mongodb/diagnostic.data/metrics.2019-12-05T17-38-23Z-00000 mongodb/diagnostic.data/metrics.2019-11-15T02-22-31Z-00000 mongodb/diagnostic.data/metrics.2019-12-01T03-58-23Z-00000 mongodb/diagnostic.data/metrics.2019-11-04T12-04-09Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-32-45Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-22-27Z-00000 mongodb/diagnostic.data/metrics.2019-11-20T17-17-16Z-00000 mongodb/diagnostic.data/metrics.2019-11-08T14-42-31Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-27-58Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-33-52Z-00000 mongodb/diagnostic.data/metrics.2019-11-10T20-37-31Z-00000 mongodb/diagnostic.data/metrics.2019-11-23T05-48-39Z-00000 mongodb/diagnostic.data/metrics.2019-11-22T23-37-16Z-00000 mongodb/diagnostic.data/metrics.2019-11-04T10-51-02Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-37-15Z-00000 mongodb/diagnostic.data/metrics.2019-11-06T09-32-31Z-00000 mongodb/diagnostic.data/metrics.2019-12-10T06-10-22Z-00000 mongodb/diagnostic.data/metrics.2019-11-18T12-06-17Z-00000 mongodb/diagnostic.data/metrics.2019-12-07T23-00-22Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-24-19Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-26-23Z-00000 mongodb/diagnostic.data/metrics.2019-11-28T20-58-23Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T10-08-45Z-00000 mongodb/diagnostic.data/metrics.2019-11-03T09-39-15Z-00000 mongodb/_mdb_catalog.wt mongodb/collection-0--2162478702715552952.wt mongodb/collection-4--6287796740362363623.wt mongodb/index-5--2162478702715552952.wt mongodb/journal/ mongodb/journal/WiredTigerPreplog.0000000001 mongodb/journal/WiredTigerPreplog.0000000002 mongodb/journal/WiredTigerLog.0000000434 
mongodb/WiredTiger.wt mongodb/collection-2--4216008088303394775.wt mongodb/WiredTigerLAS.wt mongodb/WiredTiger.turtle mongodb/index-6--6287796740362363623.wt mongodb/index-4--2162478702715552952.wt mongodb/collection-2--6287796740362363623.wt mongodb/index-3--6287796740362363623.wt mongodb/index-1--2162478702715552952.wt mongodb/index-1--4216008088303394775.wt mongodb/collection-0--4216008088303394775.wt mongodb/mongod.lock mongodb/WiredTiger mongodb/collection-0--6287796740362363623.wt mongodb/index-4--4216008088303394775.wt mongodb/storage.bson mongodb/sizeStorer.wt mongodb/index-2--2162478702715552952.wt mongodb/index-3--4216008088303394775.wt mongodb/index-1--6287796740362363623.wt mongodb/index-5--6287796740362363623.wt 

Can anyone familiar with MongoDB take a look: what kind of backup data is this, and how do I restore it? I tried MongoDB's own import command and it keeps saying the format is wrong; I went from the latest 7.0.4 all the way down to 3.0, and every version says the format is wrong. Could this have been exported by some third-party tool?
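The listing above is not a mongodump: it is a raw WiredTiger data directory (the *.wt files plus journal/), which mongorestore cannot read. A cold copy like this is normally brought back by pointing a mongod of a matching era at it; given the 2019 file dates and the presence of WiredTigerLAS.wt, a 4.0.x or 4.2.x binary is the plausible range (an assumption; verify against the old deployment). Sketch:

```
tar -xf backup.tar                        # yields the mongodb/ data directory
mongod --dbpath ./mongodb --port 27018    # use a 4.0/4.2-era mongod binary
# once it starts cleanly, mongodump from it and restore into a modern server
```

Work on a copy of the extracted directory so a failed recovery attempt cannot damage the only backup.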
MongoDB's permission management feels complicated (2023-11-24, Inzufu)

The free database on mongodb.com is actually a decent experience, but recently I wanted to self-host MongoDB on my own VPS and found the permission management confusing, with few good tutorials to be found online.
For example the default test database: db.dropDatabase() never manages to delete it. And with authentication enabled, connections without a password still succeed, and you can even log in with any existing username plus an empty password (access is then denied for lack of permissions, but it still doesn't feel safe).
Is this how MongoDB is designed, or is it my configuration? Please advise.

Are there any recent MongoDB tutorials for beginners? (2023-10-27, Dingzhen)

The videos out there are all for 4.0 or 5.0 and seem outdated. Is there an up-to-date tutorial or video series on using MongoDB with Java?
Does anyone else find writing mongoose in TypeScript painful? (2023-09-28, amlee)

mongoose's type system feels chaotic; nothing I do seems right.

Take ObjectId alone: I still can't figure out the difference between mongoose.Types.ObjectId and mongoose.Schema.Types.ObjectId.

And the docs also mention a mongoose.ObjectId, which the IDE immediately flags as a type error. Painful.
Creating an index on a production MongoDB (2023-09-23, whyalsme)

MongoDB v4.0.

I need to add an index in production. Is there a way to do it without affecting live traffic, or any approach or experience for keeping the impact to a minimum?
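On 4.0 specifically (per the post), the standard low-impact option is a background build; foreground builds lock the collection for the duration. A sketch of the command document (collection and key names are placeholders), which any driver or shell can send via runCommand:

```javascript
// Build a createIndexes command with background: true (meaningful on
// MongoDB 4.0; from 4.2 onward all builds are hybrid and the flag is
// ignored).
function backgroundIndexCommand(collection, keys) {
  const name = Object.entries(keys)
    .map(([field, dir]) => `${field}_${dir}`)
    .join("_");
  return {
    createIndexes: collection,
    indexes: [{ key: keys, name, background: true }],
  };
}
```

On a replica set, a rolling build (build on one secondary at a time, then step down the primary) reduces the impact further.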
New to MongoDB: what do you all use in place of MySQL's "select ... for update"? (2023-07-23, Haujilo)

I recently started using MongoDB and noticed it has transactions now.

Some of our business code accesses MySQL from multiple threads and uses select ... for update to lock a row when updating data. A common case: lock an order, call an external API, then update the status, rolling back on failure; other threads touching the row block in the meantime.

With MongoDB, can the same effect be achieved with native commands (I couldn't find any), or is an extra distributed lock unavoidable?
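MongoDB has no blocking row lock; the two usual substitutes are a transaction (which aborts with WriteConflict instead of blocking) or an atomic status/lease flip via findOneAndUpdate. A sketch of the lease variant (field names are made up for illustration):

```javascript
// Build the filter/update pair that claims an order unless someone
// else already holds an unexpired lock on it.
function lockOrderOps(orderId, owner, now = Date.now()) {
  return {
    filter: {
      _id: orderId,
      $or: [
        { lock: { $exists: false } },
        { "lock.expiresAt": { $lt: now } },
      ],
    },
    update: {
      $set: { lock: { owner, expiresAt: now + 30_000 } },
    },
  };
}
```

Callers that fail to match simply retry or give up, instead of blocking the way select ... for update does; the expiry guards against a crashed worker holding the lock forever.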
Looking for a good MongoDB management tool (2023-07-19, manasheep)

How do I recover MongoDB when only the on-disk database files are left? (2023-06-29, slcun)

The CentOS system broke, so I swapped in a new disk and reinstalled. The old disk still has the MongoDB database files. How do I recover them? Can I just copy and paste them onto the new system?
When doing a $lookup on a subfield of a field that doesn't exist, the value becomes an empty object after grouping; how do I make it null? (2023-02-17, imldy)

The scenario is fetching child comments for a comment.

Comment document:

{ "_id": { "$oid": "63ed9bd52b031a24fdbe1e1e" }, "creatorId": { "$oid": "63e51a155ca7f018d6038967" }, "text": "comment body 0216" }

Child-comment document:

{ "_id": { "$oid": "63ee03eb98b24f5603c044da" }, "linkCommentId": { "$oid": "63ed9bd52b031a24fdbe1e1e" }, "replyToUserId": { "$oid": "63e51a155ca7f018d6038967" }, "creatorId": { "$oid": "63e51a155ca7f018d6038967" }, "text": "test child comment 0216-2" }

Pipeline:

[
  { $match: { _id: ObjectId("63ed9bd52b031a24fdbe1e1e") } },
  // Find the comment's child comments in the child-comment collection
  // (assume this comment has none, so replies is an empty array)
  { $lookup: { from: "dynamicChildComment", localField: "_id", foreignField: "linkCommentId", as: "replies" } },
  // Unwind the child comments (yields one document with no replies field)
  { $unwind: { path: "$replies", preserveNullAndEmptyArrays: true } },
  // 1. Look up each child comment's creator; afterwards the replies object
  //    has a single property, creator, whose value is an empty array
  { $lookup: { from: "users", localField: "replies.creatorId", foreignField: "_id", as: "replies.creator" } },
  // 2. Afterwards the replies object has no properties
  { $unwind: { path: "$replies.creator", preserveNullAndEmptyArrays: true } },
  // 3. Afterwards the replies object has a single property, replyToUser,
  //    whose value is an empty array
  { $lookup: { from: "users", localField: "replies.replyToUserId", foreignField: "_id", as: "replies.replyToUser" } },
  // 4. Afterwards the replies object has no properties
  { $unwind: { path: "$replies.replyToUser", preserveNullAndEmptyArrays: true } },
  { $group: {
      _id: "_id",
      // at this point replies is an array with an empty object as element 0
      replies: { $push: "$replies" },
  } },
]

How can I turn the replies array into an empty array?
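One common fix (a sketch; behavior should be verified against your server version) is a trailing stage that strips the empty object left behind when the $lookup found nothing, turning [ {} ] into []:

```javascript
// Trailing pipeline stage: filter out empty objects from the replies
// array produced by the $group above.
const dropEmptyReplies = {
  $set: {
    replies: {
      $filter: {
        input: "$replies",
        as: "r",
        cond: { $ne: ["$$r", {}] },
      },
    },
  },
};
```

Aggregation comparisons are defined over documents, so comparing each element against {} is legal here.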
Fellow Java devs: what ORM do you use with MongoDB? (2023-02-01, nekomiao)

Looking for something that integrates well with Spring and supports MyBatis-Plus-style lambda expressions.
A question about using mongodb lookup (2023-01-13, slomo)

How do I join data?

Problem description:

There are two document types, order and product, with the following data:

// order
{
  "id": 1,
  "name": "my order",
  "products": [
    { "productId": 1, "num": 2 },
    { "productId": 2, "num": 1 }
  ]
}
// product
{ "id": 1, "name": "test product", "price": 10.0 }
{ "id": 2, "name": "regular product", "price": 18.8 }

If I want the data structure below, how should the query be written?

{
  "_id": 1,
  "name": "my order",
  "products": [
    { "productId": 1, "num": 2, "product": { "id": 1, "name": "test product", "price": 10.0 } },
    { "productId": 2, "num": 1, "product": { "id": 2, "name": "regular product", "price": 18.8 } }
  ]
}
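One pipeline shape that produces the structure above (a sketch; the collection name "product" is assumed) is unwind, lookup, unwind, then group back together:

```javascript
// Embed each line item's product document inside the order.
const orderWithProducts = [
  { $unwind: "$products" },
  { $lookup: {
      from: "product",
      localField: "products.productId",
      foreignField: "id",
      as: "products.product",
  } },
  // $lookup always yields an array; take its single element
  { $unwind: "$products.product" },
  { $group: {
      _id: "$_id",
      name: { $first: "$name" },
      products: { $push: "$products" },
  } },
];
```

An index on product.id keeps the per-line-item lookup cheap.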
mongod memory fills up and the process exits (2022-12-10, among)

For a while now, a MongoDB server (version 4.4.10, on CentOS 7.9 with 36 GB of RAM) has frequently crashed when memory fills up.

The wiredTigerCacheSizeGB parameter is already configured.

Startup command:

mongod --bind_ip_all --auth --journal --oplogSize 8192 --wiredTigerCacheSizeGB 28 --logpath logs/mongod.log --logappend --dbpath data --directoryperdb

The OS log:

[screenshot: image.png]
December 14 at The Langham, Shanghai, Xintiandi: MongoDB Day Shanghai, sponsored by my company; contact me to register. (2022-12-07, don1731)

  • Time: 9 a.m. to 5 p.m.
  • Keynotes: MongoDB SVP for North Asia; MongoDB Technical Director for North Asia
  • Customer talks: NetEase Games, Migu Video, Tencent Games, Pumpkin Film
  • Raffle prizes: AirPods Pro, Razer mechanical keyboard, Huawei power bank
  • More details: WeChat MTUzNTM3MzcwNDI=
mongoose foreign-key query question (2022-10-11, Chan66)

There are two collections, one for users and one for orders. How do I query all orders belonging to users whose appId is 1?

const orderSchema = new mongoose.Schema({
  user: { type: mongoose.SchemaTypes.ObjectId, ref: "User", required: true },
  orderNum: { type: String, required: true },
})

const UserSchema = new mongoose.Schema({
  nickname: String,
  avatarUrl: String,
  phone: String,
  appId: { type: String },
});
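mongoose cannot filter one collection by a ref'd document's field in a single find(); the usual pattern is two queries, ids first and then $in (a sketch built on the schemas above):

```javascript
// Find all orders whose user has the given appId: collect the matching
// user ids, then filter orders by them.
async function ordersForApp(User, Order, appId) {
  const userIds = await User.find({ appId }).distinct("_id");
  return Order.find({ user: { $in: userIds } });
}
```

An aggregation with $lookup from orders into users would do it in one round trip, at the cost of bypassing mongoose document hydration.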
MongoDB upsert: newly added fields don't get their values set (2022-07-15, wurenzhidi)

Goal: add two new fields, TfuEigenvalue and TevEigenvalue, to existing MongoDB records and populate them.
Problem: the values of TfuEigenvalue and TevEigenvalue both end up as "".
Resolution: row.ObtainDate (a string) could be "", a case that was never handled before; after fixing that, the problem disappeared.

The buggy code:

for _, row := range rows {
    upsertFilter := bson.M{
        "vin": row.VinNo,
    }
    // the string-to-time conversion here was the problem
    date, _ := time.ParseInLocation("2006-01-02 15:04:05", row.ObtainDate, time.Local)
    err := db.Update(upsertFilter, bson.M{"$set": DimVehicleT5{
        Vin:           row.VinNo,
        BrandID:       row.BrandName,
        BrandName:     row.BrandName,
        ModelID:       row.CarSeriesCode,
        ModelName:     row.CarSeriesCode,
        CarModel:      row.ProductCode,
        TbjEigenvalue: row.TbjEigenvalue,
        TfuEigenvalue: row.TfuEigenvalue,
        TevEigenvalue: row.TevEigenvalue,
        ObtainDate:    uint64(date.UnixNano()) / 1e3,
        ConfigName:    row.ConfigName,
    }}, false, bson.M{"upsert": true})
    if err != nil {
        log.Errorf("zyh5 db insert err:%v", err)
        continue
    }
}

Question: why would a string-parsing problem affect the assignment of the other two fields? I couldn't reproduce it locally.
How do I enable collection sharding through the go.mongodb.org driver? Does the driver not support running scripts? (2022-07-13, Liuwilliam1)

sh.enableSharding("database");
sh.shardCollection("database.collection", { "_id": "hashed" }, false, { numInitialChunks: 20 * 5 });
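The sh.* helpers are mongosh shell sugar; a driver enables sharding by sending the underlying commands against the admin database via its RunCommand API. The command documents (a sketch; shown as plain objects since their shape is the same in every driver) look like:

```javascript
// Raw admin commands behind sh.enableSharding / sh.shardCollection.
const enableSharding = { enableSharding: "database" };
const shardCollection = {
  shardCollection: "database.collection",
  key: { _id: "hashed" },
  numInitialChunks: 100, // 20 * 5, as in the post
};
```

In the Go driver this would be client.Database("admin").RunCommand(ctx, ...) with these documents as bson.D values.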
nodejs mongodb (2022-07-01, QGabriel)

nodejs
------
app.post('/GetMail', function (req, res) {
  Models.MAIL.find((err, items) => {
    console.log(items)
  })
})

db.js
-----
const mongoose = require('mongoose')
mongoose.connect('mongodb://localhost:27017/abc', { useNewUrlParser: true }, (err, db) => {
  console.log(db);
  if (err) {
    console.log('********** [database connection failed] **********')
  } else {
    console.log('********** [database connection succeeded] **********')
  }
})
const mail = new mongoose.Schema({
  code: Number,
},
{
  collection: 'mail'
})
const Models = {
  MAIL: mongoose.model('mail', mail),
}
module.exports = Models

mongoDB
-----
The database is named abc, and under it there is mail => 20220602.
I want to query the data in the '20220602' collection specifically, but it always comes back empty.
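If the documents actually live in a collection named 20220602 (rather than mail, which the schema above is pinned to), the model has to be bound to that exact collection; mongoose.model's third argument does this. Sketch:

```javascript
// Pin a model to an exact collection name, e.g. per-day collections
// like "20220602"; the model name just needs to be unique.
function mailModelFor(mongoose, schema, day) {
  return mongoose.model(`mail_${day}`, schema, day);
}

// Usage sketch: mailModelFor(mongoose, mailSchema, "20220602").find({})
```

Without the third argument (or a matching collection option in the schema), mongoose queries the collection implied by the model name, which here is 'mail', hence the empty results.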
What kind of quantum database is MongoDB? (2022-06-22, mlxy123123)

I run a MongoDB instance on a VPS to store data scraped by a crawler.

Every few days part of a collection would go missing. The code checked out fine, and since a daily full re-scrape backfilled the data anyway, I left it alone.

A couple of days ago I decided to deal with it, and set up a Celery task running every minute that alerts whenever any expected collection is missing.

Three days later, no data has gone missing since.

Is MongoDB the first database to store data as a probability cloud, collapsing under observation and dissipating when unobserved?

(What I actually want to ask: what could cause the data loss, and how do I investigate it?)
Help: "Failed with error 'aborted'" — what does this mean and how should it be handled? (2022-05-30, lyang)

Migration Results for the last 24 hours:
36 : Success
1 : Failed with error 'aborted', from shard1 to shard3
1 : Failed with error 'aborted', from shard1 to shard2

2022-05-30T10:46:18.407+0800 I SHARDING [conn1034] about to log metadata event into changelog: { _id: "rabbit-node1-2022-05-30T10:46:18.407+0800-62942ffa44ce915f01bbaa4d", server: "rabbit-node1", clientAddr: "ip:33616", time: new Date(1653878778407), what: "moveChunk.error", ns: "data.m", details: { min: { a: 27, originTime: 1646064091546 }, max: { a: 144, originTime: 1646092716120 }, from: "shard1", to: "shard2" } }
2022-05-30T10:47:26.036+0800 I SHARDING [conn1034] about to log metadata event into changelog: { _id: "rabbit-node1-2022-05-30T10:47:26.036+0800-6294303e44ce915f01bc43b2", server: "rabbit-node1", clientAddr: "ip:33616", time: new Date(1653878846036), what: "moveChunk.error", ns: "data.m", details: { min: { a: 144, originTime: 1646092716120 }, max: { a: 265, originTime: 1646098360020 }, from: "shard1", to: "shard3" } }
mongo 4.2: multiple transactions modifying the same document report WriteConflict (2022-04-09, Liuwilliam1)

P.S. my workload genuinely does need to modify the same document concurrently. I've seen two solutions suggested online:

1. Tune the server, e.g. set maxTransactionLockRequestTimeoutMillis=36000000
2. Throttle at the application layer, e.g. implement a queueing system

Does anyone have better suggestions? Thanks a lot.
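A third option often used alongside the two above: WriteConflict (error code 112) is a transient error under MongoDB's optimistic concurrency, so the transaction can simply be retried in a bounded loop (a sketch; driver specifics omitted). Note the driver's own withTransaction helper already retries TransientTransactionError for you.

```javascript
// Retry an async operation when it fails with WriteConflict (code 112).
async function withWriteConflictRetry(fn, attempts = 5) {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (err.code !== 112 || i >= attempts - 1) throw err;
    }
  }
}
```

Adding jittered backoff between attempts reduces the chance that the same two transactions keep colliding.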
Loading all MongoDB data into Redis: how to make it faster? (2022-03-07, leebs)

Say there are 500k documents. A full db query followed by stuffing everything into Redis may blow up memory. Paged db queries need a count first, which is an expensive operation, and with batched inserts the final result may not match the database (other deletes and updates can happen in between).

Take Bloom filters as an example: how are hundreds of millions of entries usually imported?
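One way to avoid both the count and skip-based paging is keyset pagination on _id: each batch resumes strictly after the last _id seen, so batches stay cheap and stable even while the collection changes (a sketch; the fetch and sink functions are injected placeholders):

```javascript
// Copy a collection to a sink in _id-ordered batches, no count, no skip.
async function copyInBatches(fetchBatch, sink, batchSize = 5000) {
  let lastId = null;
  for (;;) {
    const filter = lastId ? { _id: { $gt: lastId } } : {};
    const docs = await fetchBatch(filter, batchSize); // sorted by _id asc
    if (docs.length === 0) return;
    await sink(docs); // e.g. a Redis pipeline / MSET
    lastId = docs[docs.length - 1]._id;
  }
}
```

For strict consistency with concurrent writes, a change stream started before the copy can replay the modifications that happened during it.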
A question about a statistics query (2022-01-05, aqtata)

The data looks roughly like this:

[
  { "date": "20220101", "id": "aaa", "name": "jack" },
  { "date": "20220101", "id": "aaa", "name": "tony" },
  { "date": "20220102", "id": "aaa", "name": "jack1" },
  { "date": "20220102", "id": "aaa", "name": "jack2" },
  { "date": "20220102", "id": "bbb", "name": "jack3" },
  { "date": "20220103", "id": "aaa", "name": "jack" }
]

I need to group by date and get, per day, the total row count and the distinct count of each field. The expected result:

date      count  id_count  name_count
20220103  1      1         1
20220102  3      2         3
20220101  2      1         2

I've looked into aggregation: once $group groups by date, there's no way to count the other fields' groups. $push can collect the other fields into a new array during grouping, but that array contains duplicates, and all I want is the deduplicated count.

Right now I stupidly run several separate queries. Can the expected result be produced in one query?
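This can be done in one aggregation: collect the other fields with $addToSet (which deduplicates, unlike $push) and take $size afterwards. A sketch matching the sample data above:

```javascript
// Per-date totals plus distinct counts of id and name.
const dailyCounts = [
  { $group: {
      _id: "$date",
      count: { $sum: 1 },
      ids: { $addToSet: "$id" },
      names: { $addToSet: "$name" },
  } },
  { $project: {
      _id: 0,
      date: "$_id",
      count: 1,
      id_count: { $size: "$ids" },
      name_count: { $size: "$names" },
  } },
  { $sort: { date: -1 } },
];
```

If a field has very high cardinality per day, the accumulated sets can grow large; at that scale a two-level $group (first by date+field value, then by date) avoids holding the sets in memory.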
How do I do a fuzzy query on keys in MongoDB? (2021-12-30, ptrees)

Straight to the example:

{
  "groups": {
    "a": [
      { "id": 1, "status": "open" },
      { "id": 2, "status": "open" },
      { "id": 3, "status": "closed" }
    ],
    "b": [
      { "id": 4, "status": "closed" },
      { "id": 5, "status": "open" },
      { "id": 6, "status": "closed" }
    ]
  }
}

Given this data, how do I implement a query like find({"groups.*.status": "closed"})?
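MongoDB has no wildcard in query paths, but inside an aggregation the keys themselves can be turned into data with $objectToArray, after which the status values are reachable. A sketch (hedged; check it against your server version, and note that this shape cannot use an index):

```javascript
// Match documents where any group under `groups` contains a member
// with status "closed".
const matchClosed = {
  $match: {
    $expr: {
      $anyElementTrue: [
        {
          $map: {
            input: { $objectToArray: "$groups" }, // [{k: "a", v: [...]}, ...]
            as: "g",
            in: { $in: ["closed", "$$g.v.status"] },
          },
        },
      ],
    },
  },
};
```

If this query is frequent, reshaping groups into an array of { name, members } subdocuments makes it expressible as a plain indexed find on "groups.members.status".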
A mongo connection question (2021-12-24, wjx0912)

Navicat connects fine, and the mongo shell connects fine without a database name, but it fails as soon as the database name is appended.

What error could this be? Thanks.
Converting strings back to ObjectId in MongoDB (2021-12-23, leebs)

The original MongoDB data contains fields of type ObjectId.

When building the cache, the MongoDB data is stored in Redis via JSON.stringify, at which point every ObjectId field becomes a string.

When reading from the cache, any ObjectId field has to be converted back by hand.

Is there another way to convert types automatically based on the Schema, or to keep JSON.stringify from converting ObjectId?
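The bson package's EJSON (EJSON.stringify / EJSON.parse) exists for exactly this round-trip and preserves ObjectId and Date. If plain JSON must be kept, a JSON.parse reviver that re-wraps 24-hex strings is a common workaround (a sketch; the ObjectId class is injected, and note the hex test will also catch ordinary 24-character hex strings that were never ids):

```javascript
// Build a JSON.parse reviver that turns 24-hex strings back into
// ObjectId instances.
function objectIdReviver(ObjectId) {
  return (key, value) =>
    typeof value === "string" && /^[0-9a-f]{24}$/.test(value)
      ? new ObjectId(value)
      : value;
}

// Usage sketch: JSON.parse(cached, objectIdReviver(require("bson").ObjectId))
```

Restricting the reviver to known id field names (e.g. keys ending in Id) removes the false-positive risk.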
Asking about incremental updates (2021-12-14, wafm)

I've scraped some data and want to update it into the database incrementally. Baidu and Google turned up nothing; do any of you have a good trick?
Help: how to do an exact match in MongoDB (2021-12-08, Rkls)

{
  "_id": "111",
  "bgtime": "bgtime",
  "edtime": "edtime",
  "key1": "val1",
  "key2": "val2",
  "key3": "val3"
}

The shell command corresponding to my program is:

db.coll.update(
  {
    "bgtime": { $exists: true },
    "edtime": { $exists: true },
    "key1": "val1",
    "key2": "val2"
  },
  {
    $set: {
      "bgtime": "bgtime",
      "edtime": "edtime",
      "key1": "val1",
      "key2": "val2"
    }
  },
  {
    upsert: true
  })

The result I want is a document that, apart from "_id", has only the 4 keys in the query condition; if no such document exists, the update document should be inserted. But my filter matches the existing document above (which has a key3), so the insert never happens. I currently have two ideas:
1. Use find_many (a mongocxx API) to fetch all matching documents, then compare them against my key-value pairs myself, but that is a lot of work.
2. On every insert, additionally write an _index field whose value is "key1val1<separator>key2val2", so queries can be written by concatenating the _index value.
What I would like to know is whether MongoDB provides a way to directly match exactly the document I'm querying for.
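There is no first-class "match exactly these keys" operator, but $objectToArray makes the key count queryable; combined with the normal equality filter this gets the same effect without a shadow _index field. A sketch (hedged; $expr in find filters requires MongoDB 3.6+, and this part of the filter cannot use an index):

```javascript
// Filter: the given fields match AND the document has no keys besides
// these and _id.
function exactMatchFilter(fields) {
  return {
    ...fields,
    $expr: {
      $eq: [
        { $size: { $objectToArray: "$$ROOT" } },
        Object.keys(fields).length + 1, // +1 for _id
      ],
    },
  };
}
```

With this filter the example document above (which carries a key3) no longer matches, so the upsert inserts as intended.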
MongoDB transactions + Promise.all(): some of the operations fail. Why? (2021-12-02, IvanLi127)

Of the two snippets below, the first only updates some of the records; the second works correctly. Does anyone know why?

await Promise.all(
  tuples.map(async ([list, count]) => {
    // await this.listModel.findOne({ _id: list }); // with this line present it also works; without it, it doesn't
    await this.listModel.updateOne(
      { _id: list },
      { $inc: { sampleCount: -count } },
      { session },
    );
  }),
);

for (const [list, count] of tuples) {
  await this.listModel.updateOne(
    { _id: list },
    { $inc: { sampleCount: -count } },
    { session },
  );
}

I checked: every updateOne reports one modified document, so the updates do succeed, but after the transaction completes only some of the data is actually updated.

MongoDB 4.4.
Recursively finding subdirectories in MongoDB (2021-12-01, among)

# directory table
class TC_struct(Document):
    name = StringField()        # directory name
    parent = ObjectIdField()    # id of the parent directory

# file table
class TC_item(Document):
    # containing directory
    parent = ReferenceField(TC_struct)

Given a directory, recursively find all files inside it.

# first find all the directories; path_id is the id of the chosen directory
path_ls = recurs_path(TC_struct, path_id)

# then find all files under those directories
qry_list = Q(parent__in=path_ls)

# the recursive directory walk:

def recurs_path(tb_cls, path_id):
    rds = tb_cls.objects(parent=ObjectId(path_id)).only('id')
    rt = list()
    rt.append(ObjectId(path_id))
    for rd in rds:
        # recurse into each subdirectory's subdirectories
        rt.extend(recurs_path(tb_cls, rd._id))
    return rt

The problem: when the directory structure is deep, say 4,000+ directories, the recursion takes extremely long.

Is there a way to make the recursion more efficient? The underlying need is: recursively find all files in a directory.
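The per-level round trips are what make this slow; MongoDB's $graphLookup walks the whole parent chain server-side in a single aggregation. A sketch (the collection name "tc_struct" is an assumption derived from the mongoengine class; an index on parent is what makes it fast):

```javascript
// One aggregation that returns a directory together with all of its
// descendant directories.
function descendantsPipeline(pathId) {
  return [
    { $match: { _id: pathId } },
    { $graphLookup: {
        from: "tc_struct",
        startWith: "$_id",
        connectFromField: "_id",
        connectToField: "parent",
        as: "descendants",
    } },
  ];
}
```

The resulting ids (the directory itself plus everything in descendants) then feed the existing parent__in file query, replacing 4,000 queries with two.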
Help: after starting the service, the process exits as soon as my program runs (2021-11-24, QGabriel)

{"t":{"$date":"2021-11-24T17:21:55.891+08:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
    {"t":{"$date":"2021-11-24T17:21:55.906+08:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
    {"t":{"$date":"2021-11-24T17:21:55.907+08:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."}
    {"t":{"$date":"2021-11-24T17:21:55.908+08:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":651,"port":27017,"dbPath":"../db/","architecture":"64-bit","host":"DuDU"}}
    {"t":{"$date":"2021-11-24T17:21:55.908+08:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.3","gitVersion":"913d6b62acfbb344dde1b116f4161360acd8fd13","modules":[],"allocator":"system","environment":{"distarch":"x86_64","target_arch":"x86_64"}}}}
    {"t":{"$date":"2021-11-24T17:21:55.909+08:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Mac OS X","version":"19.6.0"}}}
    {"t":{"$date":"2021-11-24T17:21:55.909+08:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"storage":{"dbPath":"../db/"}}}}
    {"t":{"$date":"2021-11-24T17:21:55.911+08:00"},"s":"W", "c":"STORAGE", "id":22271, "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"../db/mongod.lock"}}
    {"t":{"$date":"2021-11-24T17:21:55.912+08:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"../db/","storageEngine":"wiredTiger"}}
    {"t":{"$date":"2021-11-24T17:21:55.912+08:00"},"s":"W", "c":"STORAGE", "id":22302, "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."}
    {"t":{"$date":"2021-11-24T17:21:55.912+08:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
    {"t":{"$date":"2021-11-24T17:21:56.721+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:721730][651:0x116fc9dc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 248 through 249"}}
    {"t":{"$date":"2021-11-24T17:21:56.784+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:784976][651:0x116fc9dc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 249 through 249"}}
    {"t":{"$date":"2021-11-24T17:21:56.847+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:847170][651:0x116fc9dc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 248/256 to 249/256"}}
    {"t":{"$date":"2021-11-24T17:21:56.848+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:848367][651:0x116fc9dc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 248 through 249"}}
    {"t":{"$date":"2021-11-24T17:21:56.925+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:925858][651:0x116fc9dc0], file:index-3-7599426911076859335.wt, txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 249 through 249"}}
    {"t":{"$date":"2021-11-24T17:21:56.966+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:966567][651:0x116fc9dc0], file:index-3-7599426911076859335.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
    {"t":{"$date":"2021-11-24T17:21:56.966+08:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637745716:966632][651:0x116fc9dc0], file:index-3-7599426911076859335.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
    {"t":{"$date":"2021-11-24T17:22:04.603+08:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":8691}}
    {"t":{"$date":"2021-11-24T17:22:04.603+08:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
    {"t":{"$date":"2021-11-24T17:22:04.609+08:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
    {"t":{"$date":"2021-11-24T17:22:04.617+08:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
    {"t":{"$date":"2021-11-24T17:22:04.623+08:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
    {"t":{"$date":"2021-11-24T17:22:04.624+08:00"},"s":"W", "c":"CONTROL", "id":22138, "ctx":"initandlisten","msg":"You are running this process as the root user, which is not recommended","tags":["startupWarnings"]}
    {"t":{"$date":"2021-11-24T17:22:04.624+08:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]}
    {"t":{"$date":"2021-11-24T17:22:04.624+08:00"},"s":"W", "c":"CONTROL", "id":22184, "ctx":"initandlisten","msg":"Soft rlimits too low","attr":{"currentValue":256,"recommendedMinimum":64000},"tags":["startupWarnings"]}
    {"t":{"$date":"2021-11-24T17:22:04.645+08:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
    {"t":{"$date":"2021-11-24T17:22:04.649+08:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"../db/diagnostic.data"}}
    {"t":{"$date":"2021-11-24T17:22:04.651+08:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
    {"t":{"$date":"2021-11-24T17:22:04.651+08:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}}
    {"t":{"$date":"2021-11-24T17:22:04.651+08:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
    {"t":{"$date":"2021-11-24T17:22:07.247+08:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:49394","connectionId":1,"connectionCount":1}}
    {"t":{"$date":"2021-11-24T17:22:07.248+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:49394","client":"conn1","doc":{"driver":{"name":"PyMongo","version":"3.11.2"},"os":{"type":"Darwin","name":"Darwin","architecture":"x86_64","version":"10.15.7"},"platform":"CPython 3.8.7.final.0"}}}
    {"t":{"$date":"2021-11-24T17:22:07.249+08:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:49395","connectionId":2,"connectionCount":2}}
    {"t":{"$date":"2021-11-24T17:22:07.250+08:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:49396","connectionId":3,"connectionCount":3}}
    {"t":{"$date":"2021-11-24T17:22:07.250+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:49395","client":"conn2","doc":{"driver":{"name":"PyMongo","version":"3.11.2"},"os":{"type":"Darwin","name":"Darwin","architecture":"x86_64","version":"10.15.7"},"platform":"CPython 3.8.7.final.0"}}}
    {"t":{"$date":"2021-11-24T17:22:07.250+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"127.0.0.1:49396","client":"conn3","doc":{"driver":{"name":"PyMongo","version":"3.11.2"},"os":{"type":"Darwin","name":"Darwin","architecture":"x86_64","version":"10.15.7"},"platform":"CPython 3.8.7.final.0"}}}
    {"t":{"$date":"2021-11-24T17:22:15.965+08:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:49422","connectionId":4,"connectionCount":4}}
    {"t":{"$date":"2021-11-24T17:22:15.965+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn4","msg":"client metadata","attr":{"remote":"127.0.0.1:49422","client":"conn4","doc":{"application":{"name":"robo3t-1.4.2"},"driver":{"name":"MongoDB Internal Client","version":"4.2.6-18-g6cdb6ab"},"os":{"type":"Darwin","name":"Mac OS X","architecture":"x86_64","version":"19.6.0"}}}}
    {"t":{"$date":"2021-11-24T17:22:16.005+08:00"},"s":"I", "c":"NETWORK", "id":23018, "ctx":"listener","msg":"Error accepting new connection on local endpoint","attr":{"localEndpoint":"127.0.0.1:27017","error":"Too many open files"}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"thread5","msg":"WiredTiger error","attr":{"error":24,"message":"[1637745736:631536][651:0x7000060ff000], log-server: __directory_list_worker, 46: ../db//journal: directory-list: opendir: Too many open files"}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"thread5","msg":"WiredTiger error","attr":{"error":24,"message":"[1637745736:631685][651:0x7000060ff000], log-server: __log_prealloc_once, 505: log pre-alloc server error: Too many open files"}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"thread5","msg":"WiredTiger error","attr":{"error":24,"message":"[1637745736:631712][651:0x7000060ff000], log-server: __log_server, 961: log server error: Too many open files"}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"thread5","msg":"WiredTiger error","attr":{"error":-31804,"message":"[1637745736:631736][651:0x7000060ff000], log-server: __log_server, 961: the process must exit and restart: WT_PANIC: WiredTiger library panic"}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"F", "c":"-", "id":23089, "ctx":"thread5","msg":"Fatal assertion","attr":{"msgid":50853,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp","line":520}}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"F", "c":"-", "id":23090, "ctx":"thread5","msg":"\n\n***aborting after fassert() failure\n\n"}
    {"t":{"$date":"2021-11-24T17:22:16.631+08:00"},"s":"F", "c":"CONTROL", "id":4757800, "ctx":"thread5","msg":"Writing fatal message","attr":{"message":"Got signal: 6 (Abort trap: 6).\n"}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31431, "ctx":"thread5","msg":"BACKTRACE: {bt}","attr":{"bt":{"backtrace":[{"a":"10BA5DB9C","b":"1098F2000","o":"216BB9C","s":"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE","s+":"10C"},{"a":"10BA5F2A8","b":"1098F2000","o":"216D2A8","s":"_ZN5mongo15printStackTraceEv","s+":"28"},{"a":"10BA5CDDB","b":"1098F2000","o":"216ADDB","s":"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv","s+":"BB"},{"a":"7FFF6B6465FD","b":"7FFF6B643000","o":"35FD","s":"_sigtramp","s+":"1D"},{"a":"0"},{"a":"7FFF6B51C808","b":"7FFF6B49D000","o":"7F808","s":"abort","s+":"78"},{"a":"10BA442D7","b":"1098F2000","o":"21522D7","s":"_ZN5mongo25fassertFailedWithLocationEiPKcj","s+":"197"},{"a":"1099BF6FB","b":"1098F2000","o":"CD6FB","s":"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc","s+":"1FB"},{"a":"109B2AAE7","b":"1098F2000","o":"238AE7","s":"__eventv","s+":"607"},{"a":"109B2AD66","b":"1098F2000","o":"238D66","s":"__wt_panic_func","s+":"FD"},{"a":"109A4659E","b":"1098F2000","o":"15459E","s":"__log_server","s+":"44E"},{"a":"7FFF6B652109","b":"7FFF6B64C000","o":"6109","s":"_pthread_start","s+":"94"},{"a":"7FFF6B64DB8B","b":"7FFF6B64C000","o":"1B8B","s":"thread_start","s+":"F"}],"processInfo":{"mongodbVersion":"4.4.3","gitVersion":"913d6b62acfbb344dde1b116f4161360acd8fd13","compiledModules":[],"uname":{"sysname":"Darwin","release":"19.6.0","version":"Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; 
root:xnu-6153.141.2.2~1/RELEASE_X86_64","machine":"x86_64"},"somap":[{"path":"/Users/quanwei/Desktop/mongodb/bin/./mongod","machType":2,"b":"1098F2000","vmaddr":"100000000","buildId":"88F05A2CDBD83B9F98DAF635FC65C2E6"},{"path":"/usr/lib/system/libsystem_c.dylib","machType":6,"b":"7FFF6B49D000","vmaddr":"7FFF67253000","buildId":"BBDED5E6A6463EEDB33A91E4331EA063"},{"path":"/usr/lib/system/libsystem_platform.dylib","machType":6,"b":"7FFF6B643000","vmaddr":"7FFF673F9000","buildId":"009A7C1F313A318EB9F230F4C06FEA5C"},{"path":"/usr/lib/system/libsystem_pthread.dylib","machType":6,"b":"7FFF6B64C000","vmaddr":"7FFF67402000","buildId":"62CB1A980B8F31E7A02BA1139927F61D"}]}}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"10BA5DB9C","b":"1098F2000","o":"216BB9C","s":"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE","s+":"10C"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"10BA5F2A8","b":"1098F2000","o":"216D2A8","s":"_ZN5mongo15printStackTraceEv","s+":"28"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"10BA5CDDB","b":"1098F2000","o":"216ADDB","s":"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv","s+":"BB"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"7FFF6B6465FD","b":"7FFF6B643000","o":"35FD","s":"_sigtramp","s+":"1D"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"0"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"7FFF6B51C808","b":"7FFF6B49D000","o":"7F808","s":"abort","s+":"78"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"10BA442D7","b":"1098F2000","o":"21522D7","s":"_ZN5mongo25fassertFailedWithLocationEiPKcj","s+":"197"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"1099BF6FB","b":"1098F2000","o":"CD6FB","s":"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc","s+":"1FB"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"109B2AAE7","b":"1098F2000","o":"238AE7","s":"__eventv","s+":"607"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"109B2AD66","b":"1098F2000","o":"238D66","s":"__wt_panic_func","s+":"FD"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"109A4659E","b":"1098F2000","o":"15459E","s":"__log_server","s+":"44E"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"7FFF6B652109","b":"7FFF6B64C000","o":"6109","s":"_pthread_start","s+":"94"}}}
    {"t":{"$date":"2021-11-24T17:22:16.641+08:00"},"s":"I", "c":"CONTROL", "id":31427, "ctx":"thread5","msg":" Frame: {frame}","attr":{"frame":{"a":"7FFF6B64DB8B","b":"7FFF6B64C000","o":"1B8B","s":"thread_start","s+":"F"}}}
    zsh: abort sudo ./mongod --dbpath ../db/ ]]>
    I've listed the points to watch for injection attacks when using MongoDB from Golang; please take a look and tell me if there's anything else to note tag:www.v2ex.com,2021-11-20:/t/816718 2021-11-20T02:37:11Z 2021-11-20T14:55:10Z vvhhaaattt member/vvhhaaattt
  • Looking at the php+mongodb examples, the root cause of those vulnerabilities seems to be that PHP, as a dynamic language, parses request parameters into legitimate complex PHP types before handing them to the MongoDB driver. User input can thus be parsed into a valid list-like PHP object, and the BSON String and Javascript types are not kept apart, which enables injection.
  • I looked at the official Golang MongoDB driver: BSON Javascript-typed data is handled separately from string, strings are escaped, and input in Golang is generally of type string.
  • Question: when using MongoDB from Golang, if queries never pass concatenated Javascript and only plain strings, is that enough to prevent this kind of injection?

    ]]>
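    The PHP failure mode described in the first bullet can be sketched in a few lines of JavaScript (a hypothetical illustration, not taken from any of these posts; `buildFilter` and `assertString` are invented names): when user input is deserialized into rich objects, an attacker can smuggle query operators into the filter, whereas a plain-string parameter, as in Go, cannot carry one.

    ```javascript
    // Sketch of NoSQL operator injection: a filter built straight from
    // deserialized request parameters.
    function buildFilter(username, password) {
      return { username: username, password: password };
    }

    // Honest input: the filter matches only the exact password.
    const safe = buildFilter("alice", "s3cret");

    // Malicious input: a JSON body like {"password": {"$ne": null}} arrives
    // as an object, and the resulting filter matches ANY password.
    const evil = buildFilter("alice", { $ne: null });

    // A minimal guard in the spirit of the poster's Go observation: accept
    // only plain strings, so operator objects never reach the driver.
    function assertString(v) {
      if (typeof v !== "string") throw new TypeError("expected a plain string");
      return v;
    }

    console.log(JSON.stringify(evil.password)); // → {"$ne":null}
    ```

    The Go driver's static `string` typing gives the same guarantee by construction, which is why the poster's plain-string-only approach closes off this class of injection.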
    Help request tag:www.v2ex.com,2021-10-29:/t/811443 2021-10-29T02:13:44Z 2021-10-29T05:38:22Z yinft member/yinft How do you write the equivalent of MySQL's CASE WHEN in MongoDB? I searched for a long time without finding anything practical. How would it be written with MongoTemplate? Could someone point me to a reference or give me some direction? 4346e3eb11bd4d389afeabbd916c121.png

    I want to group by startTime per day and sum: the flowInNum values from 6:00 to 9:00 summed into a new field morning; the values from 11:00 to 13:00 summed into a new field afternoon; the values from 18:00 to 20:00 summed into a new field night.

    ]]>
    MongoDB query question tag:www.v2ex.com,2021-10-28:/t/811256 2021-10-28T06:27:41Z 2021-10-28T08:44:21Z yinft member/yinft How do you write the equivalent of MySQL's CASE WHEN in MongoDB? I searched for a long time without finding anything practical. How would it be written with MongoTemplate? Could someone point me to a reference or give me some direction? 4346e3eb11bd4d389afeabbd916c121.png

    I want to group by startTime per day and sum: the flowInNum values from 6:00 to 9:00 summed into a new field morning; the values from 11:00 to 13:00 summed into a new field afternoon; the values from 18:00 to 20:00 summed into a new field night.

    ]]>
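    The usual MongoDB translation of CASE WHEN is `$cond` (or `$switch`) inside a `$group`, summing a value only when the hour falls in a bucket. A minimal sketch, assuming `startTime` is a BSON date and `flowInNum` a number as the post describes; `hourBucketSum` is an invented helper, the bucket bounds follow the poster's hours (adjust `$lt`/`$lte` if the upper end should be inclusive), and in Spring Data's MongoTemplate roughly the same shape is built with `ConditionalOperators.when(...).then(...).otherwise(...)` inside a group operation.

    ```javascript
    // CASE WHEN as a conditional sum: contribute valueExpr when
    // lo <= hour < hi, otherwise contribute 0.
    function hourBucketSum(hourExpr, lo, hi, valueExpr) {
      return {
        $sum: {
          $cond: [
            { $and: [{ $gte: [hourExpr, lo] }, { $lt: [hourExpr, hi] }] },
            valueExpr,
            0,
          ],
        },
      };
    }

    const hour = { $hour: "$startTime" };

    // Group by calendar day, with one conditional sum per time-of-day bucket.
    const pipeline = [
      {
        $group: {
          _id: { $dateToString: { format: "%Y-%m-%d", date: "$startTime" } },
          morning:   hourBucketSum(hour, 6, 9, "$flowInNum"),
          afternoon: hourBucketSum(hour, 11, 13, "$flowInNum"),
          night:     hourBucketSum(hour, 18, 20, "$flowInNum"),
        },
      },
    ];
    ```

    The pipeline would be passed to `collection.aggregate(pipeline)`; note that `$hour` reports UTC unless a `timezone` argument is given.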
    After enabling auth on MongoDB connections, each database switch takes about 0.5 s, so switching between multiple databases is slow tag:www.v2ex.com,2021-10-05:/t/806007 2021-10-05T14:41:05Z 2021-10-05T17:41:05Z sunhk25 member/sunhk25 Without authentication it was fast (effectively 0 s), but with auth enabled every DB switch takes around 0.5 s.
    I tried roles = root and readWriteAnyDatabase; the result was the same either way.
    Is there any parameter setting that can speed this up? ]]>
    A baffling MongoDB problem: the service is up, but I can't connect tag:www.v2ex.com,2021-09-17:/t/802474 2021-09-17T02:17:54Z 2021-09-17T13:11:22Z hvboekml member/hvboekml I have a website that uses a MongoDB database. What I can't figure out is that the site itself works fine, yet mongo won't connect.

    I haven't touched the configuration recently and don't dare restart the service rashly, so I'm asking everyone for advice.

    ]]>
    Not trying to start a flame war: what kind of workload do you use MongoDB for, and why choose it over a relational database? tag:www.v2ex.com,2021-09-03:/t/799670 2021-09-03T06:24:03Z 2021-09-03T16:05:22Z zxCoder member/zxCoder js/ts and MongoDB feel quite comfortable together; you can store a JS object directly in the database with no extra setup. Some people say it's faster than relational databases, but I'm not sure how reliable that claim is.

    ]]>