Ruaidhrigh

       I AM THE LAW.
 Ruaidhrigh Featherstonehaugh

Besides, Lemmy needs reactions

  • 0 Posts
  • 42 Comments
Joined 3 years ago
Cake day: August 26th, 2022


  • I like this idea, but with the increase in supply chain attacks, I’m reluctant to use it. I’ve been much more reticent about installing from AUR, and my use of github projects has drastically slowed down since I now feel as if I have to read all the source code for everything I get.

    I’ve sandboxed programs before, and I may just start making that standard practice, but still… it makes me angry. It’s, like: this is why we can’t have nice things. There are precious few OSS supply-chain static analysis tools, and there are a lot of languages I don’t know well enough to review, or which have such broad or deep dependency trees that it’s more work than it’s worth. The most frustrating part is the dampening effect it’s having on OSS. It only pushes people toward programs from big commercial companies.
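    For untrusted builds, sandboxing can be as light as a throwaway profile. A minimal sketch, assuming firejail is installed and ./some-tool is a stand-in for whatever was just fetched:

    firejail --net=none --private ./some-tool   # no network access, empty throwaway home directory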

    Anyway, none of that is directly related to your program, which is really cool. Sadly, if there aren’t any positive developments in the OSS ecosystem for attacking the supply chain problem, cool projects like this are not going into my toolbox.


  • The idea is that blkdiscard will tell the SSD’s own controller to zero out everything

    Just to be clear, blkdiscard alone does not zero out anything; it just marks blocks as empty. --secure tells compatible drives to additionally wipe the blocks; -z actually zeros out the contents in the blocks like dd does. The difference is that - without the secure or z options - the data is still in the cells.
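    For reference, the three behaviours map onto three invocations (sdX is a placeholder; double-check the device first):

    blkdiscard /dev/sdX            # plain discard: blocks are marked unused, data may remain in the cells
    blkdiscard --secure /dev/sdX   # secure discard, only on drives that advertise support for it
    blkdiscard -z /dev/sdX         # writes zeroes to the whole device, comparable to dd if=/dev/zero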

    always encrypt all of your storage

    Yes! Although, I don’t think hindsight is helpful for OP.


Hm? Both bspwm and herbstluftwm have tabbed layouts. It’s been so long since I’ve used i3, but it has them too, right? Sway’s a mostly config-compatible, mostly client-compatible i3 clone for Wayland, so I’d expect it to have tabs, too. As well as floating windows, which every tabbing WM I’ve used also supports.

    I think I missed your point. What are you saying? Did I say something that made you think I thought tiling WMs could only do tiling?

    What I’m opinionated about is configuration files. Technically, even a desktop could be configuration-less, although I’ve never seen one. I have become insistent that my WM have no configuration that isn’t set through a client call. Sway still uses a config file like i3; mostly the same config file, unless it’s drifted significantly. That was Sway’s whole killer feature: i3 users could switch from X11 to Wayland with only minor configuration file changes.


  • Sorry, it wasn’t the Arch wiki. It was this page.

    I hate using Stack Exchange as a source of truth, but the Arch wiki references this discussion, which points out that not all SSDs support “Deterministic read ZEROs after TRIM”, meaning a pure blkdiscard is not guaranteed to clear data (unless the device advertises that feature), leaving it available for forensics. That means having to use --secure, which is also not supported by all devices, which in turn means having to use -z, which the previous source claims is equivalent to dd if=/dev/zero.
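    If it helps, SATA drives report whether they advertise that behaviour in their identify data; something along these lines (sdX being a placeholder) shows the relevant lines on drives hdparm can query:

    sudo hdparm -I /dev/sdX | grep -i trim   # look for “Deterministic read ZEROs after TRIM”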

    So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren’t. How are you getting at the hidden cells with blkdiscard? The paper you referenced does not mention blkdiscard directly, as that’s a Linux-specific command, but other references imply or state it’s just calling TRIM. That same paper, in a footnote below section 3.3, claims TRIM adds no reliable data security.

    It looks - especially from that security paper - like the cells are inaccessible and not reliably clearable by any mechanism. blkdiscard then adds no security over dd, and I’d be interested to see whether, with -z, it’s any faster than dd, since it would perforce have to write zeros to all blocks just the same, rather than just marking them “discarded”.

    I feel that, unless you know the SSD supports secure TRIM, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.




I have no doubt ZFS is solid, too, FWIW. I leaned toward btrfs because it was simple: the commands are straightforward and clear, and nothing requires more than one step. That’s all super valuable to me, because there are other things I’d rather spend my time on than fiddling with the filesystem.

    @ikidd@lemmy.world said that btrfs is poor at software RAID.

    You should check for yourself. I haven’t used software RAID in years - RAID 0+1 gives me no value - but the btrfs team and the Arch wiki say 0, 1, and 10 are solid. You should not use 5 or 6, as they’re known to be buggy, and even the btrfs man page tells you not to use them. So, yeah? btrfs is poor at RAID 5/6; to my understanding, it’s good at 0/1/10.
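    If you want to sanity-check the RAID 1 path yourself, it really is a one-step affair. A sketch with made-up device names:

    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirror both metadata and data across two disks
    mount /dev/sdb /mnt                              # either device can be used to mount the array
    btrfs filesystem usage /mnt                      # shows how the data and metadata profiles are laid out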

    btrfs can do compression, snapshots, and some RAID; encryption still has to come from a layer underneath it, like LUKS. I found that combining mdadm, lvm, and a filesystem built a Jenga tower: if one part failed, the entire end result was borked. I once did an OS upgrade and lost the mdadm config, and spent two days recreating it. I never used it on a new machine after that. Separation of concerns is great, but having an all-in-one that can self-repair and boot into snapshots is better.
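    The snapshot side is equally terse. A sketch, assuming the common layout where / is a subvolume and a /.snapshots directory exists:

    btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)   # read-only snapshot of the root subvolume
    btrfs subvolume list /                                       # list the subvolumes and snapshots you have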

    I can’t speak to performance. No doubt Tom’s Hardware or someone like that has looked into it in detail.




IME, btrfs is easier to work with than ZFS. It has all of the features you asked for; its RAID levels other than 0/1/10 are buggy, but 0/1/10 are considered reliable. In the past year I heard a rumor that they were going to announce RAID 5/6 as also stable, but that’s hearsay; I haven’t read anything authoritative on the subject - the Arch btrfs page and the btrfs man page both still say 5/6 are not reliable.

    I’ve been using btrfs on a variety of computers and VMs, from tiny little ODROIDs, to laptops, to VPSes, to desktops, for… over a decade? I’ve had much better reliability than with ext4. I was attracted to the POLS (principle of least surprise) of the commands, vs ZFS.

    I don’t know how much my opinion weighs; I have a feeling a data center person would suggest ZFS as being more “enterprise”. I’ve been really happy with it. I’ve been watching bcachefs for the caching and target options - really neat features useful for home gamers - but otherwise I wouldn’t bother - btrfs has been solid and done everything I could want. It was a huge upgrade from mdadm and lvm in UX, and was only possible when disks got so cheap they outpaced my need for RAID5, and I could afford multiple backup drives that held years’ worth of nightly incremental backups.
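    For what it’s worth, incremental backups on btrfs boil down to snapshot plus send/receive. A rough sketch with made-up paths, /mnt/backup being the backup drive and yesterday’s snapshot already existing on both sides:

    btrfs subvolume snapshot -r / /.snapshots/$(date +%F)                                        # read-only snapshot of today
    btrfs send -p /.snapshots/yesterday /.snapshots/$(date +%F) | btrfs receive /mnt/backup      # send only the delta since the parent snapshot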






I did some light reading. I see claims that wear leveling only ever writes to zeroed sectors. Let me get this straight:

    If I have a 1TB ssd, and I write 1TB of SecretData, and then I delete and write 1TB of garbage to the disk, it’s not actually holding 2TB of data, with the SecretData hidden underneath wear leveling? That’s the claim? And if I overwrite that with another 1TB of garbage it’s holding, what now, 3TB of data? Each data sequence hidden somehow by the magic of wear leveling?

    Skeptical Ruaraidh is skeptical. Wear leveling ensures data on an SSD is written to free sectors with the lowest write count. It can’t possibly be retaining data if a full device’s worth of data is written to it.

    I see a popular comment on SO saying you can’t trust dd on SSDs, and I challenge that: in this case, wiping an entire disk by dumping /dev/random over it must clean the SSD of all other data. Otherwise, someone’s invented the storage version of a perpetual motion device. To be safe, sync and read it back, and maybe dump again, but I really can’t see how an SSD would hold more data than it can.

    dd if=/dev/random of=/dev/sdX bs=2M count=524288

    If you’re clever enough to be using zsh as your shell:

    repeat 3 (dd if=/dev/random of=/dev/sdX bs=2M count=524288 ; sync ; dd if=/dev/sdX of=/dev/null bs=2M)

    You reduce every single cell’s write lifespan by two or three writes; with modern lifespans of 3,000-100,000 writes per cell, that’s not significant.

    Someone mentioned blkdiscard. If you really aren’t concerned about forensic analysis, this is probably the fastest and least impactful answer: it won’t affect cell lifespans by even a measly two writes. But it also doesn’t actually remove the data; it just tells the SSD that those cells are free and empty. Probably really hard to reconstruct data from that, but also probably not impossible. dd is a shredding option: safer, slower, and with a tiny impact on drive lifespan.


  • Educate me.

    My response would normally be: dd if=/dev/random of=/dev/sdX bs=1024M, followed by a sync. Lowest common denominator nearly always wins in my book over specialty programs that aren’t part of a minimal core; tools that also happen to be in BusyBox are the best.

    What makes this situation special enough for something more complex than dd? Do SSDs not actually save the data you tell them to? I’m trying to guess at how writing a disk’s worth of garbage directly to the device would fail. I’m imagining some holographic effect, where you can actually store more data than the drive holds. A persistent, on-disk cache that you have no way of directly affecting, but which can somehow be read later and hold latent data?

    If I were really paranoid, I’d dd, read the entire disk to /dev/null, then dd again. How would this not be sufficient?
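    Spelled out, with sdX as a placeholder for the target device (the block size and status=progress are just habits of mine):

    dd if=/dev/random of=/dev/sdX bs=4M status=progress   # first pass of garbage over the whole device
    sync                                                  # flush anything still buffered
    dd if=/dev/sdX of=/dev/null bs=4M                     # read the whole device back
    dd if=/dev/random of=/dev/sdX bs=4M status=progress   # second pass, for the properly paranoid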

    I’m honestly trying to figure out what the catch is, here, and why this was even a question - OP doesn’t sound like a novice.


  • I have to put in a plug for herbstluftwm.

    It really depends on whether you like keyboard-driven, tiling window managers, or whether you prefer dragging windows around and resizing them. Tiling window managers are popular, but they’re definitely an acquired taste.

    hlwm and bspwm are a “configurationless” breed - I think river on Wayland is the same. This has become my one requirement for a window manager. All configuration is done through command-line client calls, and it’s game changing. The “configuration” is just a specific shell script hlwm runs when it starts up, full of whatever client calls are needed to configure the system. Every call in that script can be run outside the script; it’s literally just a shell script. I run all sorts of things in it: launching “desktoppy” programs like kanata and setx, autostarting programs on a specific screen; one script lays out one screen in a complex 2x1 layout where each pane is tabbed and contains three terminals, then launches terminals that connect to various remote computers - that’s my “remote server” screen, and it’s all set up when I log in.
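    A trimmed-down sketch of what such an autostart script looks like - the bindings and settings here are illustrative, not my actual config:

    #!/usr/bin/env bash
    hc() { herbstclient "$@"; }              # every line below is a plain client call
    hc keybind Mod4-Return spawn alacritty   # keybindings are client calls
    hc set frame_gap 8                       # so are settings
    hc split right 0.5                       # and layout manipulation
    hc spawn firefox                         # and autostarted programs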

    However - definitely for tiling enthusiasts. I used i3 for a decade before I found bspwm, which converted me to configurationless WMs, and I ended up with hlwm. It’s honestly what’s preventing me from giving Wayland a serious go, although river might do the trick.



Edit: I haven’t tried this myself, but from what I can find the gparted part is not necessary. You can get rid of Windows and re-use the partition for Linux with a single command: btrfs device add /dev/old_windows_partition /. The rest of the considerations below still apply.

    The answer to the question you asked is: make sure you know which partition it is and run dd if=/dev/random of=/dev/<partition> bs=1024. Then you’ll probably want to find which boot loader you’re using and remove the Windows option. That will delete Windows.

    Re-using the free space, which is what most folks are focusing on, might be far easier than the other comments make it sound.

    Odds are decent that you’re using btrfs. Several major distros default to it these days, so unless you changed it, it may well be btrfs. With btrfs, you can simply change the partition type and add the old partition to your existing filesystem.

    1. Use the program gparted. You can do all of this on the command line with fdisk, but gparted is a GUI program and is easier if you’re more comfortable with GUIs. Find the Windows partition, make sure you know it’s the Windows partition and not the boot partition (the boot partition will be the really tiny one), click on the Windows partition and choose the “change partition type” function to switch it to a Linux partition. There will be warnings; heed them, double-check, and then save and exit.
    2. Add the old Windows partition to your existing filesystem with: btrfs device add /dev/sdx2 / (device first, then the mount point). This adds the partition /dev/sdx2 to the filesystem mounted at / - your root filesystem. Replace /dev/sdx2 with whatever partition Windows used to be on; there’s a sketch of the whole flow after this list.
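    A sketch of the command-line version of that flow, with /dev/sdx2 standing in for the old Windows partition - verify the device name with lsblk before touching anything:

    lsblk -f                               # identify the old Windows (NTFS) partition
    sudo wipefs -a /dev/sdx2               # remove the old NTFS signature (destroys the Windows data)
    sudo btrfs device add -f /dev/sdx2 /   # add the partition to the btrfs filesystem mounted at /
    sudo btrfs filesystem usage /          # confirm the extra space is available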

    That’s it. Now your Linux filesystem is using the old Windows partition. If you don’t change the boot options, your system may still believe there’s a Windows to boot into after a reboot. On EFI the old entry will simply stop working (you can delete it with efibootmgr), and with grub you’ll have to tell grub that Windows isn’t there anymore or else it’ll keep offering it to you at each boot.
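    With grub, regenerating the config is usually enough to drop the stale entry (the exact path varies by distro; some ship an update-grub wrapper), and on EFI efibootmgr shows what’s left over:

    sudo grub-mkconfig -o /boot/grub/grub.cfg   # rebuild the grub menu without the Windows entry
    efibootmgr                                  # lists EFI boot entries; a stale one can be removed with -b <num> -B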

    You are almost certainly not using RAID, so you don’t need to worry about rebalancing.

    Summary: it is very likely your distribution used btrfs for your Linux partition. In that case, the absolute easiest way to get rid of Windows and use it for Linux is to add the partition to your btrfs filesystem. No reformatting, repartitioning, reinstalling; just tell btrfs to use it and you’re done.