So it looks like my thought process was at least legitimate: the issues there are truly thorny, and ZFS had to design around them. Now that I understand that we have to commit to putting all metadata on the special vdev from the beginning, that clearly sidesteps these thorny issues; instead we need to provision for redundancy within the special vdev itself, since losing that vdev means losing the pool, the same situation as with any other vdev.

Then what would happen when you cut power? … I had typed up a whole line of questions before I did more reading and realized these drawbacks of the special vdev. My line of reasoning was this: if we could put our metadata on NVMe, and if we also kept the (desired) redundancy for metadata on the other HDD vdevs, then the added speed would allow the discrepancy between the metadata copies on those devices to grow potentially unbounded if we slam the pool with write operations. Does this also mean that in such a configuration the other vdevs would not get metadata? Redundancy for the metadata must be provided within this vdev? That is a huge drawback and added complexity.

Looks like if we want to set up the special vdev (e.g. …) I’ve got some dumb questions… Sorry if it’s been already addressed in the responses.

I will certainly say people should think twice about whether they really need it, and make sure they cannot manage with just an L2ARC setup instead. Having to replace it, keeping it as a mirror… so many annoying downsides to the special small block device. It can be a lot of trouble, while the L2ARC device can basically be pulled while the pool is live and it doesn’t matter, because all the data is replicated on the data storage vdevs.
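Since a lost special vdev means a lost pool, it needs the same redundancy as any data vdev. A minimal sketch of what that looks like; the pool name `tank` and the device paths are placeholders:

```shell
# Create a pool with a mirrored HDD data vdev and a mirrored NVMe special vdev.
# Losing the special vdev loses the pool, so mirror it like any other vdev.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Or attach a mirrored special vdev to an existing pool:
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
```

Note that once added, a special vdev cannot simply be pulled like an L2ARC device; on raidz pools it generally cannot be removed at all, which is part of the commitment discussed above.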
I’m very happy with my special small block device because it soaks up all the small IO. But I’m sure there are many cases where recommending secondarycache=metadata would be the better choice… Of course, the benefit of the special small block device is the ability to also store small files rather than metadata only; it would be nice if the L2ARC had a similar feature, where one could define the maximum size of the records stored in it.

I suppose that is why the option to do only metadata on the special small block device doesn’t exist. The special small block device is certainly a far riskier solution than just setting secondarycache=metadata.

I actually prefer L2ARC over special vdevs for my server, but the case for non-evictable metadata stored on fast SSD is a strong one for special vdevs, although I don’t personally experience this in my use case, as I have plenty of ARC + (persistent) L2ARC.

Edit: Module Parameters - OpenZFS documentation

Special vdevs are a solution, but ARC and L2ARC should be able to cache all your metadata except in very rare edge cases, or if you went cheap on memory. I would advise against primarycache=metadata for obvious reasons. Other than that, there are also things like secondarycache=metadata if you want more metadata cached. If you are working with small recordsizes and the corresponding huge amounts of block pointers, etc., you may find that this inflated amount of metadata is more than your default ARC can handle. There is a fraction of the ARC size reserved for metadata, but if you see metadata getting evicted too often, you can change the values via tunables. I don’t use it myself, so I don’t have the tunable at hand; the ZFS documentation should give you more insights there.
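The properties discussed above can be set per dataset. A sketch, assuming a pool named `tank` with a dataset `tank/data` (both placeholder names); the exact ARC metadata tunable name varies by OpenZFS release, so check the Module Parameters documentation linked above:

```shell
# Cache only metadata (no file data) in the L2ARC for this dataset:
zfs set secondarycache=metadata tank/data

# With a special vdev, also route data blocks up to 32K to it.
# The default is 0, meaning the special vdev holds metadata only:
zfs set special_small_blocks=32K tank/data

# ARC metadata tunables live under the zfs module parameters; the
# parameter names differ between releases, so list what is available:
grep . /sys/module/zfs/parameters/zfs_arc_meta* 2>/dev/null
```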
Does ARC already cache metadata? Could we do something like:
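Yes: by default the ARC caches both data and metadata, governed by the primarycache property (default `all`). A sketch of how one might confirm this and inspect current metadata usage, with `tank` as a placeholder pool name; the exact arcstats field names vary by OpenZFS release:

```shell
# primarycache=all (the default) means ARC caches data and metadata:
zfs get primarycache tank

# Inspect the ARC's current metadata footprint (field names vary by release):
grep -E 'meta' /proc/spl/kstat/zfs/arcstats
```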