I saw fellow V2EX users recommend immich and spun one up with Docker myself, but I woke up to find the immich-server service had crashed. Has anyone else run into this?

YaD2x · 360 days ago

According to the docs, the machine's specs are more than sufficient, but the log seems to point to a JS memory problem. I found the related error on GitHub and added the setting below to the environment variables (a compose sketch follows the log), but it didn't solve the problem. Worried the memory still wasn't enough, I also tried raising it to 12G; still no luck...

NODE_OPTIONS="--max-old-space-size=8192"
[befw@ryzen immich-app]$ sudo docker logs -f 1ac655d420b1
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [NestFactory] Starting Nest application...
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +32ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] BullModule dependencies initialized +0ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] DiscoveryModule dependencies initialized +0ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] ScheduleModule dependencies initialized +0ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] ConfigModule dependencies initialized +5ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] BullModule dependencies initialized +0ms
[Nest] 7  - 11/19/2023, 3:54:09 AM     LOG [InstanceLoader] BullModule dependencies initialized +0ms
[Nest] 7  - 11/19/2023, 3:54:10 AM     LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +312ms
[Nest] 7  - 11/19/2023, 3:54:10 AM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +1ms
[Nest] 7  - 11/19/2023, 3:54:10 AM     LOG [InstanceLoader] InfraModule dependencies initialized +4ms
[Nest] 7  - 11/19/2023, 3:54:10 AM     LOG [InstanceLoader] DomainModule dependencies initialized +22ms
[Nest] 7  - 11/19/2023, 3:54:10 AM     LOG [InstanceLoader] MicroservicesModule dependencies initialized +0ms

<--- Last few GCs --->

[7:0x22d82310000]   789506 ms: Scavenge 12090.3 (12318.2) -> 12076.6 (12318.2) MB, 31.97 / 0.00 ms  (average mu = 0.089, current mu = 0.026) allocation failure; 
[7:0x22d82310000]   789681 ms: Scavenge 12092.3 (12318.7) -> 12080.2 (12320.7) MB, 36.14 / 0.00 ms  (average mu = 0.089, current mu = 0.026) allocation failure; 


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc99960 node::Abort() [immich_microservices]
 2: 0xb6ffcb  [immich_microservices]
 3: 0xebe910 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [immich_microservices]
 4: 0xebebf7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [immich_microservices]
 5: 0x10d06a5  [immich_microservices]
 6: 0x10d0c34 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [immich_microservices]
 7: 0x10e7b24 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [immich_microservices]
 8: 0x10e833c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [immich_microservices]
 9: 0x10be641 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [immich_microservices]
10: 0x10bf7d5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [immich_microservices]
11: 0x109cd46 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [immich_microservices]
12: 0x14f7b76 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [immich_microservices]
13: 0x7f11c4059ef6 
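
A minimal sketch of where such a variable could live, assuming the stock immich docker-compose.yml layout; the service name and the mem_limit value are assumptions. Note that the stack trace above names immich_microservices rather than immich-server, so NODE_OPTIONS has to reach that container as well:

services:
  immich-microservices:
    environment:
      # assumed placement: raises Node's old-space ceiling (value in MB)
      - NODE_OPTIONS=--max-old-space-size=8192
    # assumed hard cap so a runaway container cannot exhaust the host
    mem_limit: 10g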
Node: NAS · 8 replies
CHEN1016 · 359 days ago
It didn't feel as stable as Synology's built-in photos app, so I went back to Synology.
ixdeal · 359 days ago
Mine is rock solid, not a single problem. I use a CF tunnel to expose it from the PVE box at home; apart from being a bit slow, I can back up videos/photos to it from anywhere: https://photos.ixdeal.com
totoro625 · 359 days ago
immich updates very frequently; it working for someone else doesn't mean it will work for you.
Official warning: the project is under very active development. Expect bugs and breaking changes. Do not use it as the only way to store your photos and videos!

The problem I ran into: it crashes when too many photos are dropped in at once (more than 10k). My box is an 11400 with 12 GB of RAM.
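
One way to avoid that single huge drop (my own sketch, not something from this thread) is to feed the library in batches. The immich CLI flags below are assumptions and differ between versions, so check immich upload --help first:

# hypothetical batching: hand the CLI 500 files per invocation instead of 10k+ at once
find /mnt/photos -type f -print0 \
  | xargs -0 -n 500 immich upload --server http://<host>:2283/api --key "$IMMICH_API_KEY"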
ZXiangQAQ · 359 days ago
I've seen a similar problem on k8s. A lot of Java services are written to auto-tune their CPU and memory use so they grab as much as possible, but what they detect inside the container isn't the container's limit; it's the physical machine/VM the container runs on. They then allocate past the limit and get killed.
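
A quick way to check that mismatch on this setup (a sketch; use whatever container name docker ps shows for your deployment):

# limit the kernel actually enforces (cgroup v2 path, falling back to v1)
docker exec immich_microservices sh -c 'cat /sys/fs/cgroup/memory.max 2>/dev/null || cat /sys/fs/cgroup/memory/memory.limit_in_bytes'
# heap ceiling Node believes it has, in MB
docker exec immich_microservices node -e 'console.log(require("v8").getHeapStatistics().heap_size_limit / 1048576)'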
silverzidan · 359 days ago
@ixdeal Isn't exposing immich to the public internet discouraged?
YaD2x · 359 days ago
@ixdeal I don't know how to set up a CF tunnel yet; I just built a ZeroTier network. I'm planning to tighten up the security side.
YaD2x · 359 days ago
@ZXiangQAQ It probably did just run out of resources. After a restart today it works again; I'll keep an eye on it for now.
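
If it dies again, two cheap things worth having in place (a general sketch, not immich-specific) are watching the container's memory while a heavy job runs and letting Docker restart it on its own; the service name below is an assumption:

# snapshot of per-container memory/CPU usage
docker stats --no-stream
# in docker-compose.yml
services:
  immich-microservices:
    restart: unless-stopped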
ixdeal · 359 days ago
@YaD2x #6 ZeroTier is probably safer, being a purely private network.
