Howdy all! I recently got a bitchin’ new SSD, a Samsung 990 EVO Plus 4TB, and I am struggle-bussing trying to make it my new boot drive while keeping all of my programs and settings just the way I like them. Specs: an i7-13700K CPU and an RTX 4070 GPU plugged into an MSI MAG Z790 Tomahawk WiFi mobo, all working harmoniously to run openSUSE Tumbleweed.
Things I have done so far:
- Googled that shit; didn’t find much that helped me, unfortunately. Found a forum thread where a guy was trying to move from an HDD to an SSD and then remove the HDD, whereas I just want to change the boot drive to the SSD and keep using both drives in the same rig. Someone in that thread recommended Clonezilla, but further down I read that the UUIDs(?) get copied as well, and that using both drives in the same computer afterwards can cause issues and corrupt data. That scared me off that.
- Tried the YaST Partitioner tool, but the scary warning box it makes you click through, plus my general lack of any clue what I’m doing, scared me off that too.
- Decided to just fresh-install openSUSE Tumbleweed onto the SSD from a USB and then mount the HDD so I could copy everything over that way. Or so I thought. First I ran into the issue of the /home on the HDD not being viewable by my user on the SSD, I guess. Fixed that by unmounting the drive and remounting it with ‘-o subvol=/’ appended to the mount command; I got that from Google as well. Now I’m able to view things in /home on the HDD from the user on the SSD, and I’ve even copied some things over. However, I’m unable to access the .snapshots folder in the root directory of the HDD; I intended to copy the latest snapshot over and use it on the SSD install to bring all of my non-/home stuff across. (Rough commands sketched just below.)
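For reference, here’s roughly what I ran, from memory. The HDD’s root partition is /dev/sda2 on my system; /mnt/hdd is just the mount point I’m using as an example here:

sudo umount /mnt/hdd
# -o subvol=/ mounts the top level of the Btrfs filesystem instead of the default subvolume
sudo mount -o subvol=/ /dev/sda2 /mnt/hdd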
So I’m kinda stuck in the middle of the transfer now. I have an inclination toward laziness, so I don’t really want to spend time installing all of the flatpaks and configuring the OS again if I don’t have to, mostly because I’ve already had one false start with Linux and started fresh once, so this would be the third time setting everything up from scratch. Any help or suggestions are greatly appreciated!
Screenshot of the screen the SSD currently boots to: [screenshot]
Results of sudo btrfs subvolume list newboot:
mint@mint:~$ sudo btrfs subvolume list newboot
ID 256 gen 16336 top level 5 path @
ID 257 gen 16344 top level 256 path @/var
ID 258 gen 16342 top level 256 path @/usr/local
ID 259 gen 16336 top level 256 path @/srv
ID 260 gen 16341 top level 256 path @/root
ID 261 gen 16336 top level 256 path @/opt
ID 262 gen 16344 top level 256 path @/home
ID 263 gen 16163 top level 256 path @/boot/grub2/x86_64-efi
ID 264 gen 16163 top level 256 path @/boot/grub2/i386-pc
ID 265 gen 16327 top level 256 path @/.snapshots
ID 266 gen 16345 top level 265 path @/.snapshots/1/snapshot
ID 267 gen 65 top level 265 path @/.snapshots/2/snapshot
ID 300 gen 13737 top level 265 path @/.snapshots/33/snapshot
ID 301 gen 13737 top level 265 path @/.snapshots/34/snapshot
ID 303 gen 13737 top level 265 path @/.snapshots/36/snapshot
ID 323 gen 13737 top level 265 path @/.snapshots/56/snapshot
ID 324 gen 13737 top level 265 path @/.snapshots/57/snapshot
ID 337 gen 15853 top level 265 path @/.snapshots/70/snapshot
ID 338 gen 15855 top level 265 path @/.snapshots/71/snapshot
ID 339 gen 15884 top level 265 path @/.snapshots/72/snapshot
ID 340 gen 15886 top level 265 path @/.snapshots/73/snapshot
ID 341 gen 15889 top level 265 path @/.snapshots/74/snapshot
ID 342 gen 15891 top level 265 path @/.snapshots/75/snapshot
ID 343 gen 15929 top level 265 path @/.snapshots/76/snapshot
ID 344 gen 15931 top level 265 path @/.snapshots/77/snapshot
ID 345 gen 16281 top level 265 path @/.snapshots/78/snapshot
ID 346 gen 16287 top level 265 path @/.snapshots/79/snapshot
ID 347 gen 16291 top level 265 path @/.snapshots/80/snapshot
ID 348 gen 16326 top level 265 path @/.snapshots/81/snapshot
I appreciate your help! I probably won’t have time to work on it again until tomorrow, but I feel like I’m close.
Sorry, I’m a little unfamiliar with how openSUSE does things, so that wasn’t as useful as I was hoping lol. Did you have the HDD and SSD connected at the same time when you booted? If you did, you’ll want to disconnect the HDD first.
Also, when you get to the GRUB boot menu, pressing e will show you the config for the selected boot option; can you post a screenshot of that? You may also be able to tell whether the root UUID listed there matches the one you expect from fstab. You can also remove splash=silent and quiet from the line beginning with linux, which may give you an actual error message, although it’s possible the boot process is failing before it even gets to that point. If you post the outputs here I can take a look.
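For reference, the entry you see after pressing e should end with lines looking roughly like these (the UUID and kernel version here are completely made up, and the exact flags vary, so treat this as a sketch):

linux /boot/vmlinuz-6.11.8-1-default root=UUID=1234abcd-12ab-34cd-56ef-123456abcdef splash=silent quiet mitigations=auto
initrd /boot/initrd-6.11.8-1-default

It’s that root=UUID=... value you want to compare against your fstab.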
Edit: looking at your screenshot again, I may have misunderstood what was happening. Is it failing to even load the boot menu? Also, do you know if it’s booting via BIOS or UEFI?
Edit 2: I’ve thought about this some more and it’s looking like it might be a GRUB error rather than anything to do with subvolumes. There are a few things worth checking before going any further. First, boot from your USB with both drives connected and run sudo blkid; assuming your SSD is /dev/sda and your HDD is /dev/sdb, do the UUIDs for the partitions on /dev/sda and /dev/sdb match?
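Something like this makes the comparison easy from the live session (device names are the assumption above; substitute your real ones):

# list UUIDs for specific partitions side by side:
sudo blkid /dev/sda2 /dev/sdb2
# or just dump everything and compare the UUID= fields by eye:
sudo blkid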
Again assuming /dev/sda is the new SSD, run:
sudo mount /dev/sda2 /mnt
sudo mount /dev/sda1 /mnt/boot/efi
Then check whether these two files exist:
/mnt/boot/grub2/grub.cfg
/mnt/boot/efi/EFI/opensuse/grub.cfg
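If both mounts succeed, you can check for the two files in one go:

ls -l /mnt/boot/grub2/grub.cfg /mnt/boot/efi/EFI/opensuse/grub.cfg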
Most recently I used GParted to resize the root partition of my HDD (/dev/sda2) to be only a little larger than the amount of data actually on it, taking it from ~7 TB to ~1 TB, mostly so that I wouldn’t have to copy “empty” space, and also so that the partition would actually fit on my 4 TB SSD (/dev/nvme0n1p2). Then I created three partitions on my SSD matching the layout on the HDD (fat = nvme0n1p1, btrfs = nvme0n1p2, linux-swap = nvme0n1p3).
I then booted from a USB with Clonezilla Live on it and cloned partition to partition: sda1 > nvme0n1p1, sda2 > nvme0n1p2, sda3 > nvme0n1p3. The only way I could perform the clones without errors was to run in expert mode, selecting -icds (disables the check for drive size) and -k (can’t remember exactly what this one did, something about not copying the partition header or table?). After cloning all partitions I unhooked the HDD inside the case and tried to boot. Hit the same GRUB screen, and hitting e returned: error: ../../grub-core/script/function.c:119: can't find command 'e'.
I think it’s booting via UEFI? But I’m not sure how to actually tell. I will check for those GRUB configs in the morning though. Your help is greatly appreciated!
Well, that sounds promising! In that case I suspect the new partitions just have different UUIDs, so you probably only need to fix the fstab and regenerate the grub.cfg. Definitely check the UUIDs with sudo blkid and let me know if they’re different. It’s also worth checking that the default Btrfs subvolume hasn’t changed: mount both drives, run sudo btrfs subvolume get-default /mountpath for each, and check that the outputs match. If they don’t, paste both outputs here and we should be able to fix it.
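Something like this, with made-up mount points and the device names from your earlier comments:

sudo mkdir -p /mnt/hdd /mnt/ssd
sudo mount /dev/sda2 /mnt/hdd
sudo mount /dev/nvme0n1p2 /mnt/ssd
# these two outputs should match:
sudo btrfs subvolume get-default /mnt/hdd
sudo btrfs subvolume get-default /mnt/ssd

If the UUIDs do turn out different, regenerating the config should just be grub2-mkconfig -o /boot/grub2/grub.cfg from a chroot into the SSD, but let’s see the outputs first.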
You are almost certainly booting UEFI as your system looks to be quite new. Probably the easiest way to check is to look at your fstab; on openSUSE I believe there should be a volume mounted at /boot/efi if you’re UEFI booting.
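There’s also a quick generic check you can run from the live session; nothing openSUSE-specific about it:

# prints UEFI if the live system was booted via UEFI firmware
[ -d /sys/firmware/efi ] && echo UEFI || echo "legacy BIOS"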
Also, just to help with the next part, could you let me know which distro you’re using to boot from USB? From one of your other comments I think it’s Mint, isn’t it?
sudo blkid shows all UUIDs are the same as the partitions they were cloned from. I’m unable to mount /dev/nvme0n1p2 (the SSD root partition); it gives a “bad superblock” error. A little bit of googling led me to attempt the command
sudo btrfs rescue super-recover -v /dev/nvme0n1p2
but it told me “all supers are valid, no need to recover”. I then ran sudo dmesg and see:
BTRFS error (device nvme0n1p2): bad tree block start, mirror 1 want 2521222217728 have 0
BTRFS error (device nvme0n1p2): bad tree block start, mirror 2 want 2521222217728 have 0
BTRFS error (device nvme0n1p2): failed to read chunk root
BTRFS error (device nvme0n1p2): open_ctree failed
I think you’re right; I’m 99% confident I’ve seen the /boot/efi directory on my system in the past.
I am using Mint as my live USB image, but now I’m thinking it might have been wiser to use an openSUSE Tumbleweed live image, since I’d reckon it would be better equipped to handle Btrfs.
I think I might need to clone the drive again to fix the superblock issue, but I don’t know if I want to do it for what would be the 4th or 5th time now. I might just bite the bullet, fresh-install to the SSD again, copy my /home over, and set everything up again. It will be a pain, but not as big a one as this is becoming lol.
I am very appreciative of your time though! This experience has certainly taught me more about Linux and given me some familiarity with new commands. So thank you again!
Yeah, sorry, I’ve not come across that error before, so I have no idea how to fix it without copying the partitions again. I don’t think it’s anything to do with you not using an openSUSE image; other distros should be just as capable of handling Btrfs. I understand if you’ve had enough by now and would rather just do a fresh install! However, if you would still like to try cloning, I’ve tested it and it should be possible using GParted (assuming you can shrink the existing partition small enough to begin with). Small disclaimer: it’s possible to lose data if shrinking the partition goes wrong, so don’t do this if you don’t have an existing backup or aren’t comfortable potentially losing the data!
First, boot into the live USB with your old HDD connected. Use GParted to shrink the main root partition and apply the changes. Just pick a size that’s below the space available on the new drive but a bit bigger than the minimum size you can shrink it to; you can resize it properly once it’s copied over. Then reboot and check that the HDD is still bootable. Next, boot back into the live USB with both drives connected, delete all the existing partitions off the new SSD, and apply the changes. Open the terminal and run lsblk to check whether the swap partition is mounted; you’ll probably see /dev/sda3 listed as swap. If it is, run sudo swapoff /dev/sda3, otherwise it won’t let you copy it.
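In other words, something like this (sda3 is assumed to be the swap partition, as above):

lsblk
# if sda3 shows [SWAP] as its mountpoint, disable it before copying:
sudo swapoff /dev/sda3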
You should then be able to use GParted to copy/paste the partitions between the two disks. When you copy the swap partition, make sure it goes at the end of the disk so you can grow the main partition afterwards. For some reason, when testing in a VM, I also found I had to increase the size of the swap partition by 1 MiB or the copy process kept failing. Apply the changes, then grow the main partition to fill the remaining empty space and apply the changes once again. After that you should be able to disconnect the HDD, reboot, and have a usable system! If you want to use the existing HDD as a data drive, I would just delete all its partitions after plugging it back in and create a new one; that will ensure it has a new UUID. However, I would wait a couple of days to make sure you’re happy everything cloned properly!
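If you’d rather do that last wipe from the terminal, something like this is the idea (assuming the old HDD still shows up as /dev/sda; this erases the disk, so only run it once you’re sure everything cloned properly):

sudo wipefs -a /dev/sda   # clears the partition-table and filesystem signatures
# then create a new partition table and a fresh partition in GParted and format it;
# the new filesystem gets a brand-new UUID automatically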
Followed your instructions, and I got really excited when the SSD actually showed the screen with the boot options, but then it hangs trying to boot for a long time and hits me with this screen: [screenshot]
This feels like the closest I’ve gotten so far!
That’s strange. Could you run sudo blkid again and check that the UUIDs still match between the two drives? You should also now be able to press e on the GRUB boot menu; can you scroll to the bottom and send a screenshot of that as well?
Edit: and do the UUIDs of either disk match what you see in that error message?