This is #debian, an IRC channel on freenode
(the freenode IRC service closed
2021-06-01)
0[00:00:29] <ndroftheline> well, sorry not starting installer,
but rather progressing to the point of "Partition disks"
1[00:00:48] <ndroftheline> so it doesn't appear the
messed up GPT happens at bios/volume creation...right?
2[00:00:52] <jhutchins> ndroftheline: Have you read the
install guide? It might have some info on raid.
3[00:01:20] <ndroftheline> yes, fairly extensively. the
information on sataraid is quite limited, and severely outdated,
which others here have pointed out
4[00:01:34] <ndroftheline> i mean, extensively? no,
that's the wrong word. i've read specific bits of it
carefully
5[00:01:56] <ndroftheline> this is the most relevant page i
found
7[00:02:47] <jhutchins> ndroftheline: Some people just give up
and boot off of a separate ssd or even an sd card.
8[00:03:20] <ndroftheline> most* people, probably - and
i'm nearing my exhaustion point too :)
9[00:04:12] <ndroftheline> these are dedicated boot drives,
fwiw; these will be boot, whether i get the sataraid to work so the
debian box is vaguely like the others or i go with something
different.
25[00:18:42] *** Quits: earendel (uid498179@replaced-ip) (Quit: Connection closed for inactivity)
26[00:20:25] *** Quits: digitalD (~dp@replaced-ip) (Quit: My MacBook has gone to sleep. ZZZzzz…)
27[00:21:00] <tomreyn> From the log, I see an LSI CORP SAS2X28
managing twelve ST12000NM0027 HDDs. The mpt2sas_cm0 kernel module is
used for the SAS2X28, but the raw disks are exposed to the OS.
md(adm/cfg), during its initialization, detects two active md
devices (md127 and md126), stops them, detects an unclean RAID6
setup on md126 with a capacity of ~885 GB and initializes a resync.
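The resync tomreyn describes shows up in /proc/mdstat; a minimal sketch of pulling the progress figure out of it, run here against a made-up excerpt (the numbers below are illustrative, not from the actual machine):

```shell
# illustrative /proc/mdstat excerpt; on a real system just `cat /proc/mdstat`
cat > /tmp/mdstat.sample <<'EOF'
md126 : active raid6 sdb[3] sdc[2] sdd[1] sde[0]
      927881216 blocks level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  resync =  7.3% (68000000/927881216) finish=120.0min
EOF
# extract just the resync progress
grep -o 'resync = *[0-9.]*%' /tmp/mdstat.sample
```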
196[04:31:36] <sappheiros> i've heard the answer is yes
because most viruses target win/mac as main OS
197[04:32:32] <sney> windows's default firewall is pretty
locked down these days, microsoft learned that one the hard way. but
yes, linux tends to be safer at default, because there are fewer
bundled network services trying to do things that you didn't
intentionally set up.
198[04:32:38] <sappheiros> sorry, i'm just procrastinating
something, not asking questions that need answers i suppose;
i'll be quiet
199[04:32:53] <sappheiros> thanks. interesting.
200[04:32:57] *** Quits: ax562 (~ax562@replaced-ip) (Remote host closed the connection)
201[04:33:00] <sney> regardless, if this is a personal computer
on a private network you can safely not use a firewall at all. your
nat router is already handling that.
202[04:33:15] <sappheiros> oh, and yeah, i had to grant access
to multiple software -- or every software -- i've installed on
win10 recently. was mildly annoyed.
204[04:34:09] <sney> I've gone down a network
troubleshooting rabbit hole a couple of times before remembering
that windows 7 and up firewalls block ICMP ping by default.
>_>
231[05:09:38] <sappheiros> thanks ... starting tomorrow i will
search Internet more before asking
232[05:09:49] <sney> tip: when asking about something low level
or basic, google it along with 'tldp', or look it up on
the tldp.org site. lots of us learned a lot of what we know from
that site, and a lot of it is still accurate
233[05:13:02] <sappheiros> wooooooooooo thanks
234[05:13:06] * sappheiros wonders if tldp is related to RTFM
235[05:13:34] <sney> haha, I suspect it's a little younger.
236[05:13:41] *** Quits: sappheiros (~sappheiro@replaced-ip) (Quit: must....break....new computer fascination and ... do
something else ...)
300[07:07:20] <johnjay> yet when i installed autoconf on a fresh
buster install it also installs automake despite it not being a
dependency. why is this?
384[09:49:45] <jelly> sometimes the default /etc/sysctl.conf
changes over release upgrades, dunno if that's the case for
9->10 but when that's the case the package asks you what to
do
386[09:51:58] <jelly> the easiest way to avoid that question is
to isolate all the custom settings in a separate file in
/etc/sysctl.d/ and let the distro put the new default file in place
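jelly's suggestion as a concrete sketch; the file name and values are made up for illustration, and this needs root:

```shell
# hypothetical /etc/sysctl.d/local.conf holding only site-specific overrides,
# so dist-upgrades can replace /etc/sysctl.conf without a conffile prompt
cat > /etc/sysctl.d/local.conf <<'EOF'
# example values, not recommendations
vm.swappiness = 10
net.ipv4.ip_forward = 1
EOF
sysctl --system   # re-applies /etc/sysctl.conf plus all of /etc/sysctl.d/*.conf
```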
511[13:06:14] <oxek> if the service only has some proprietary
client, or even a protocol, then no guarantees can be made about
whether it will work in debian
523[13:22:10] <wuffi600> On bootup my eth0 automatically gets an IP
via dhcp. where can i find the config for eth0?
(/etc/network/interfaces.d is empty.)
524[13:23:43] <phogg> wuffi600: anything in
/etc/network/interfaces itself?
525[13:24:05] <wuffi600> phogg: there is only
"source-directory /etc/network/interfaces.d"
526[13:25:56] <wuffi600> phogg: could it be that dnsmasq-service
does an unwanted if-up?
557[14:09:31] <wuffi600> phogg: systemctl shows
"NetworkManager.service not-found inactive dead
NetworkManager.service". It there another idea what could be
doing ifup eth0 on bootup
562[14:13:13] <wuffi600> jason234: my system sets up eth0 on
bootup, running a dhcp client that binds an ip to it. I would like to
know why. network-manager is not running and /etc/network/interfaces
is empty.
569[14:18:44] <wuffi600> jason234: i like this manual
configuration. the system i am currently working on does not use
this manual configuration. on bootup eth0 is set ifup and a
dhcp client is started for it. How can i find out what service is
issuing that. Normally i would expect eth0 to be down and
unconfigured if there is no network-manager installed and
/etc/network/interfaces is empty. Could a running dnsmasq service
be
570[14:18:50] <wuffi600> responsible for ifup on an interface?
571[14:19:21] <jason234> either you follow the systemd
junk or you use the classic interfaces method
580[14:24:08] <jason234> the idea is to have a very big memory-usage
consumer that does everything for the admin. even reboot or fsck
no longer works fine. people use devuan nowadays if possible.
581[14:24:25] <wuffi600> i found
"ExecStart=/usr/lib/dhcpcd5/dhcpcd -q -w" in
"/etc/systemd/system/dhcpcd.service.d/wait.conf", ok...
but where is the automated "ifconfig eth0 up ..."
statement hidden?
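A few places to look for whatever is bringing eth0 up, sketched from what the log itself turned up (dhcpcd is the candidate found above; unit names are examples and the commands assume systemd):

```shell
# which enabled units mention dhcp, ifup, or networking at all?
systemctl list-unit-files --state=enabled | grep -Ei 'dhcp|network|ifup'
# any unit files or drop-ins referring to the interface by name?
grep -rn 'eth0' /etc/systemd/ /etc/network/ 2>/dev/null
# what did dhcpcd actually do during this boot?
journalctl -b -u dhcpcd.service --no-pager | head
```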
650[15:43:25] <oxek> judd is having issues me thinks
651[15:44:06] <oxek> Inepu: keep in mind that stretch is really
old by now and you should have an upgrade plan in place
652[15:45:20] <jelly> !stretch-lts
653[15:45:20] <dpkg> Security support for Debian 9
"stretch" from the Debian Security Team ended July 6 2020.
The <LTS> team will continue to provide limited security
support for some architectures and a subset of packages until June
30 2022 (total 5 year life). See
replaced-url
654[15:45:59] <jelly> stretch has some support ^ but not all
packages are covered with LTS
716[17:22:02] <jhutchins> Boy, that's annoying. I reported
a bug on buster, response is to try testing and/or unstable and see
if it's still present. Hey, that's _your_ job, I'm
running stable, I'm not a developer or packager.
758[18:03:00] <john_rambo> sney: Can you please tell me which command
to use? I am just an average home user who is really nervous about
this. Coz I had the idea that there are no backdoor/malware for
Linux.
759[18:03:06] *** Quits: dvs (~Herbert@replaced-ip) (Remote host closed the connection)
781[18:17:40] <jhutchins> shtrb: I would expect that if a
backport fixed the problem, there would be an existing bugreport
that was closed by the backport, but there isn't.
782[18:19:51] <jhutchins> shtrb: All of my storage is on a
server, accessed by cifs, and calibre on buster won't open the
library. Same version of calibre on stretch does just fine, so I
suspect the culprit is in the newer python on buster.
783[18:19:52] <jhutchins> shtrb: You can only get an upstream
fix if you're willing to run the upstream code instead of the
distribution's packages.
784[18:20:09] <jhutchins> shtrb: The preferred method is for the
packagers to report the problem upstream.
785[18:20:34] <shtrb> jhutchins, I know it's a long shot,
but can you see if the problem happens locally only (without cifs)?
794[18:26:25] <sney> john_rambo: the page I linked says it
creates systemd-agent.service if installed as root, or a
gnomehelper.desktop file if installed as a normal user, so you can
look to see if those files exist. I checked my system, and neither
file was found:
replaced-url
795[18:27:17] <shtrb> What type of malware wouldn't
replace systemctl itself :D
796[18:27:28] <sney> according to the post, not this one :P
797[18:28:06] <john_rambo> sney: Thanks a lot
798[18:28:23] <sney> john_rambo: as for your idea that there are
"no backdoor/malware for Linux", that is naive. linux may
usually be more secure by default than other OSes but you still need
to be intelligent and protect yourself on networks.
800[18:29:36] <john_rambo> sney: I check for updates everyday
& install them if available ||| I have enabled ufw ||| I use
firejail for all network facing apps
801[18:29:54] <john_rambo> sney: Frankly I dont know what else
to do
802[18:30:08] <sney> and you hope ufw and firejail will
magically protect you, while asking strangers on irc "which
command to use" to check if files exist on your system?
803[18:30:09] <john_rambo> sney: I am using DNS over TLS
805[18:30:54] <shtrb> sney, I don't think we should suggest
ufw as a default secure approach (it has default-on rules)
806[18:31:11] <sney> try learning how anything works, all of
this stuff is just tools, if you treat it as magic then you have no
way to tell if you are safe. start reading fundamentals on tldp.org
or something.
807[18:31:41] <sney> shtrb: I invite you to tell me where I
suggested ufw as default
808[18:32:03] <john_rambo> shtrb: the default rule for ufw is
>>> deny all incoming & allow all outgoing
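For reference, the defaults john_rambo quotes can be set and inspected like this (a sketch; requires root and the ufw package):

```shell
ufw default deny incoming    # drop unsolicited inbound traffic
ufw default allow outgoing   # let locally initiated connections out
ufw enable
ufw status verbose           # shows "deny (incoming), allow (outgoing)"
```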
826[19:20:01] <ndroftheline> anybody aware of a way i can check
the gpt tables in the debian installer before rebooting? i suspect
the installer is making an error which results in a quasi-broken
system. normally i'd run gdisk and try to write the discovered
gpt, seeing if there was an error, but gdisk doesn't exist
827[19:20:29] <sney> it might have parted
828[19:21:02] <ndroftheline> good thought but no gparted either.
unless there's a path problem?
832[19:21:55] <ndroftheline> no fdisk no partprobe
833[19:22:33] <ndroftheline> no parted
834[19:22:41] <ndroftheline> and yeah you're right ofc not
gparted lol
835[19:23:08] <sney> you could probably chroot into /target and
have more options there, since the install is finished it'll
have a pretty complete environment
836[19:23:33] <ndroftheline> chroot into target
837[19:25:18] <ndroftheline> do i need to do all these?
replaced-url
838[19:25:34] <sney> the debian installer mounts your root
volume under /target and uses debootstrap to install debian there.
so if you run 'chroot /target' in the installer, you will
essentially be in your debian installation without rebooting.
840[19:26:09] <ndroftheline> awesome, thanks - i'm vaguely
familiar with chroot-ing as i've done it before but the details
escape me; in this case it's just chroot /target ? that's
great
841[19:26:25] <sney> no but I would do this ' mount --bind
/dev /target/dev' first in order to have your /dev nodes
available inside the chroot.
845[19:27:59] <ndroftheline> aw gdisk command not found
846[19:28:02] <ndroftheline> can i apt in here?
847[19:28:13] <ndroftheline> guess so
848[19:28:53] <ndroftheline> aw i can't lsblk; "failed
to access sysfs directory: /sys/dev/block: no such file or
directory"
849[19:29:10] <ndroftheline> maybe ought to do all the steps
there? for name in proc sys dev do mount --bind blah?
850[19:29:46] <sney> that wiki page is weird, and not updated
since 2018, you can just 'mount -t proc none /target/proc ;
mount -t sysfs none /target/sys' from the installer
851[19:29:50] <sney> logout to get back there, then chroot again
852[19:30:42] <ndroftheline> cool that worked, thanks.
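Pulling sney's steps together, the installer-console sequence looks roughly like this (run from the d-i shell after the install step has populated /target):

```shell
mount --bind /dev  /target/dev    # device nodes available inside the chroot
mount -t proc  none /target/proc  # needed by many tools
mount -t sysfs none /target/sys   # fixes lsblk's "failed to access sysfs"
chroot /target /bin/bash          # now effectively inside the new install
# apt, gdisk, lsblk etc. work here; 'exit' drops back to the installer
```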
854[19:32:15] *** Quits: niko (~niko@replaced-ip) (Ping timeout: 619 seconds)
855[19:33:08] <ndroftheline> so yesterday i had done this exact
same installation procedure, and when i rebooted ended up with
messages in dmesg saying the gpt was screwed up; i know i can use
gdisk /dev/blah and then attempt to write the discovered partition
table to get gdisk to warn me about the problem (based on
yesterday's experience)
856[19:33:39] <ndroftheline> is there any other way you know to
check the health of the gpt now sney? or is the
attempt-to-write-gpt-with-gdisk an ok way?
857[19:33:53] <jelly> ndroftheline, are you looking at gpt on
member disks, or gpt on the array device?
873[19:37:22] <jelly> why did this happen? GPT keeps two copies
-- one at the beginning of the device, one at the end. Fakeraid uses
some space at the end AS WELL, and the resulting raid array is
slightly smaller than each member
874[19:37:27] <Aparajito> can only trust old dogs
875[19:37:53] <jelly> GPT got correctly written to the raid
array. It will be incorrectly read from raw members
876[19:38:44] <jelly> (because each member will have the second
copy not at the end of the disk, but at the end of the exposed raid
array size)
877[19:38:48] <ndroftheline> ok so one thing i noted is that in
the BIOS, the fakeraid volume (which is composed of two 950gb
drives) is 884gb in size. the debian installer reports the volume as
950gb (in the disk selection menu). is that a problem?
878[19:39:17] <jelly> various linux tools take these things into
account if you use the more typical md 1.2 format
882[19:40:25] <sney> one of those may be reporting GiB instead
of GB, but even if not it won't be a problem unless you fill
the volume up to capacity, which is bad in any case
883[19:40:27] <jelly> ndroftheline, 950GB is about 884GiB
884[19:41:29] <ndroftheline> yeh ok. the bios menus are terrible
of course because on the same screen it reports the member disk
sizes as 950gb and the raid volume as 884gb, with no differentiating
symbols indicating GB vs GiB or anything similar :weary:
885[19:41:32] <jelly> GB = 1,000,000,000 B; GiB = 1024^3 B =
1,073,741,824 B
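jelly's conversion, spelled out as integer arithmetic (shell sketch):

```shell
# 950 GB (decimal) expressed in GiB (binary); integer division rounds down
bytes=$((950 * 1000 * 1000 * 1000))
gib=$((bytes / 1024 / 1024 / 1024))
echo "$gib"   # prints 884, matching the BIOS's two figures
```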
886[19:41:47] <ndroftheline> anyway - that theory is out.
887[19:42:01] <sney> ndroftheline: just add to your list of why
you should consider migrating these systems to something other than
fakeraid, in the future :)
888[19:42:02] <jelly> one would expect consistency within the
same UI
890[19:42:52] <ndroftheline> yesterday when i came back that
same random stranger offered some other ice cream: help with
understanding what was going on in the initrd environment i landed
in after we made the changes to the fstab yesterday, and i landed in
a grub prompt
891[19:43:14] <jelly> the thing with fakeraid is that it's
well, fake, so members are exposed to the OS as-is
898[19:46:20] <ndroftheline> so recall the member partitions
have the same UUIDs as the fakeraid partitions on reboot so debian
just picks one of the member partitions to mount and i end up
booting off a member drive?
901[19:47:12] <ndroftheline> so we adjusted the fstab to ignore
the root UUID and instead pointed it directly at the md device,
/dev/md/volume0 (which i have now re-created as /dev/md/rste_volume0
btw)
902[19:47:45] <ndroftheline> well after that change, on reboot i
got a grub prompt. grub only found one "hd" entry with a
root filesystem so i manually booted to that, passing
root=/dev/md/volume0
903[19:47:53] <ndroftheline> but unfortunately ended up in an
initramfs environment instead
904[19:48:14] *** Quits: Deyaa (uid190709@replaced-ip) (Quit: Connection closed for inactivity)
906[19:49:21] <ndroftheline> i wasn't able to assemble the
array in initramfs
907[19:49:56] <ndroftheline> which suggests maybe the initramfs
environment lacks the full tooling required to assemble the
fakeraid? maybe? so i got an lsmod to compare it to an
environment where that worked
909[19:53:08] <ndroftheline> both initramfs and installer lsmod
list "md_mod 167936" so that doesn't seem likely to be the
problem
910[19:54:24] <sney> I saw some of the scrollback yesterday but
not all of it, so apologies if this is a rerun, but:
911[19:55:23] <sney> mdraid and dmraid are different things.
fakeraid is usually dmraid, with the dm_* kernel modules, and last I
checked, /dev/dm-* device nodes. mdraid is linux kernel software
raid. if mdraid is trying to assemble arrays, they are most likely
*not* the ones you configured with your bios.
912[19:56:19] <sney> mdraid tries to be clever and autodetect
arrays, as well, even if you never set one up. if your bios raid
uses similar logic, md might be "detecting" something it
shouldn't.
913[19:56:26] <ndroftheline> that's not a rerun, thanks but
yeh mdraid is actually intel's recommended mechanism to manage
this type of array. dmraid is older and less capable. there's a
whitepaper on it
914[19:56:59] <ndroftheline> from like 2014, when dmraid was
considered obsolete and mdadm the way forward :P
915[19:57:31] <sney> ha, shows how long it's been since I
bothered with fakeraid on linux
916[19:57:45] <ndroftheline> yeah...yeah.
917[19:58:49] <sney> still, your live lsmod shows dm_ modules
loaded. is that one of the working environments?
918[19:59:21] <ndroftheline> both the live and installer
environments assemble the arrays correctly
919[19:59:34] <ndroftheline> so yeah
920[20:00:45] <ndroftheline> i am very near to throwing in the
towel and going with btrfs or a regular mdadm. i had a vision of
submitting a useful bug report once i found out why this is broken,
but i'm losing hope
921[20:01:39] <sney> ok, so regardless of administration tools,
the systems with dm_mod loaded are the ones that assemble your array
correctly. put 'dm_mod' in /etc/initramfs-tools/modules,
rebuild the initramfs, and see what happens?
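sney's suggestion as commands (run as root inside the installed system or the /target chroot):

```shell
echo dm_mod >> /etc/initramfs-tools/modules   # force-load dm_mod at early boot
update-initramfs -u                           # rebuild for the current kernel
```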
924[20:02:58] <ndroftheline> first, do you know how i can force
the boot process to stop in the initramfs environment?
925[20:03:50] <ndroftheline> because what's been happening
is, the system will happily boot directly off a member drive and i
suspect that does at least some small write that makes the fakeraid
volume think it needs to resilver
926[20:04:03] <sney> yes, use a break= parameter as described in
the second section here
replaced-url
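The break= parameter goes on the kernel command line; a fragment showing where it lands (the path and root= value are illustrative; initramfs-tools accepts stages like top, modules, premount, mount, bottom, init):

```shell
# appended to the 'linux' line in GRUB's edit mode -- not run as a command:
#   linux /vmlinuz-... root=/dev/md/volume0 ro break=premount
# boot then stops in the initramfs shell just before mounting the root fs
```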
927[20:04:03] *** Quits: wintersky (uid453465@replaced-ip) (Quit: Connection closed for inactivity)
928[20:04:49] <ndroftheline> see, i need to learn to search
better; i spent some time yesterday trying to find this page. :(
929[20:05:23] <sney> the debian wiki built-in search is
terrible, anything I don't have bookmarked I do in google with
site:wiki.debian.org
932[20:07:17] <ndroftheline> is there any way to find out what
module(s) an existing device/filesystem rely on? since i'm
chrooted into the freshly-installed, not-yet-booted system?
933[20:07:43] <ndroftheline> in case it's something other
than dm_*
935[20:09:10] <ndroftheline> also in the initramfs lsmod, next
to the "raid1" module and its size, the next field reads
"0" vs "1" in both installer and live
environment lsmods. does the raid1 module need activation?
938[20:10:40] <sney> yeah, 'ls -l /dev/whatever' on
your root device, then compare the major and minor numbers to
/lib/modules/kernel-version-here/modules.alias
939[20:10:59] <ndroftheline> holy crap that's amazing, ok
940[20:12:00] <ndroftheline> did you mean lsmod instead of ls?
ls -l just shows me the ls output of that file
941[20:12:13] <ndroftheline> no, not lsmod.
942[20:12:39] <sney> I meant ls. for instance, my disk is
'brw-rw---- 1 root disk 8, 1 May 7 13:21 /dev/sda1' - b
means block, 8 is the major number, 1 is the minor.
943[20:12:59] <sney> so if this wasn't just a regular sata
disk, I might see a modules.alias entry for block-major-8-1
944[20:13:39] <ndroftheline> thanks, ok 9,126
945[20:14:50] <sney> hm, nothing, is there a higher level
control interface in the /dev root with a similar name?
946[20:15:24] <ndroftheline> there's also
/dev/md/rste_volume0 (rste_volume0 is the label i gave this volume
in the bios)
947[20:16:01] <ndroftheline> oh that's just a link back to
../md126
948[20:16:17] <sney> ah no, here we go, md has alias:
block-major-9-*
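The lookup sney describes, sketched against a two-line sample of modules.alias (on a real system you would grep /lib/modules/$(uname -r)/modules.alias instead):

```shell
# sample excerpt standing in for the real modules.alias file
cat > /tmp/modules.alias <<'EOF'
alias block-major-8-* sd_mod
alias block-major-9-* md_mod
EOF
maj=9    # major number from 'ls -l /dev/md126' -> "9, 126"
grep "block-major-${maj}-" /tmp/modules.alias   # md_mod handles major 9
```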
952[20:19:11] *** Quits: debsan (~debsan@replaced-ip) (Remote host closed the connection)
953[20:19:49] <ndroftheline> i'm struggling to search
modules.alias for the major/minor numbers - and doing some reading
to understand what major/minor numbers are.
954[20:19:53] <sney> so the disk is md, I wonder if you need to
set something in mdadm.conf to keep the autobuild from screwing
things up
955[20:20:36] <ndroftheline> well - does this help? ls -l
/dev/md :
replaced-url
960[20:23:36] <sney> ndroftheline: you said you have some other
distros working normally on this hardware, right? do any of them
have a mdadm.conf with something other than defaults?
962[20:25:55] <ndroftheline> no other linux machine running this
hardware. the sister unit to this is currently running windows. i
can try to install centos on this machine and see how it deals
965[20:27:24] <ndroftheline> on this hardware, i've never
established that it works, no. but tbh i am so close to throwing in
the towel and doing this in a more normal fashion - in my mind
i'd decided, this is the last attempt i'll make to install
debian on the fakeraid. if i can't get this going i'll
just use btrfs or regular mdadm.
976[20:31:58] <sney> we covered that, it's 9,126 and md has
block-major-9-*
977[20:33:09] <shtrb> anyone about my systemd question ?
978[20:33:10] <ndroftheline> ah, as in i don't even need to
check because it's confirmed the md module should handle all
block-major-9-* devices
979[20:33:21] <sney> exactly
980[20:33:41] <somiaj> shtrb: what was your question?
981[20:34:03] <shtrb> What systemd "event" or
"target" does the ppp process need to reach so it could start? I
went up to the wait period but there should be some easier way
replaced-url
982[20:34:19] <jmcnaught> I was also wondering what the BIOS
fakeraid contributes if you need to use md anyways. It apparently
uses a different metadata version (software RAID is 1.2, with Intel
RST it's imsm, so with mdadm commands you need to use
-e/--metadata=imsm) and presumably it provides some hardware
acceleration.
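For the curious, mdadm's imsm support can be poked at like this (a hedged sketch; device names are placeholders and this is not verified on the hardware in the log):

```shell
mdadm --detail-platform   # does this controller/firmware support imsm?
mdadm -E /dev/sda         # a member disk should report Intel (imsm) metadata
# creating a new imsm set goes via a container, roughly:
#   mdadm -C /dev/md/imsm0 -e imsm -n 2 /dev/sda /dev/sdb
#   mdadm -C /dev/md/vol0 -l 1 -n 2 /dev/md/imsm0
```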
983[20:34:22] <shtrb> lol it already timed out :D
984[20:34:29] <sney> anyway this macrium reflect looks like it
depends on NTFS internals to do anything interesting. but if there
is a linux version, I doubt it can tell the difference between bios
md and regular unmolested md. unless it's doing these backups
offline?
992[20:36:39] <ndroftheline> yes, the macrium image process is
done offline. boot macrium (it's a PE environment), it has intel
fakeraid drivers, takes an image.
993[20:37:08] <ndroftheline> the contribution, i think, jmcnaught,
is that windows can understand it as well as linux
994[20:37:19] <sney> ah. that's... hmm. like knocking down
a wall to bring in your groceries?
995[20:37:33] <shtrb> sney, is that for me ?
996[20:37:33] <somiaj> shtrb: so you are just trying to get the
dependencies working so this runs at the right time?
1001[20:39:07] <sney> ndroftheline: maybe you can convince $boss
that lvm or btrfs snapshots are better than this other thing,
because you don't have to turn off the computer to save them
1002[20:39:08] <somiaj> shtrb: have you tried
network-online.target
1004[20:39:41] <shtrb> yes, same result (not working)
1005[20:40:07] <sney> shtrb: can you paste the log output of it
not working?
1006[20:40:51] <ndroftheline> yeah, $boss is cool with snapshots
and understands that - and there is an exception process in place -
it's just that our remote hands at this site knows how to
restore a macrium reflect image already, so if things bork it's
quick-ish to get back to a working point. regardless, at this point
i'm happy to invoke the exception process and do this a
different way - it will continue to bug me why it doesn't work
because it looks like it should work
1008[20:41:29] <shtrb> sney, That's part of the problem:
"it", the application (Dell SonicWall), fails without any
meaningful output. I only know it would try to start ppp after a few
internal checks
1009[20:41:34] <ndroftheline> i mean, from all accounts, i should
be able to boot this newly installed system into the initramfs and
assemble the array...right? and the boot process continues on
happily.
1010[20:42:06] <sney> ndroftheline: have you checked for firmware
updates for this system? sometimes non-windows compatibility is not
improved until later.
1013[20:43:33] <sney> shtrb: does it work if you run the
sslvpn.sh script directly?
1014[20:43:48] <shtrb> yes, and it works ok with the approach I
did with the restart
1015[20:43:55] <somiaj> shtrb: ifupdown-wait-online.service -- is
that service enabled?
1016[20:44:02] <somiaj> shtrb: actually wait, what do you use to
configure your network?
1017[20:44:24] <ndroftheline> good thought sney but yeah i did
actually update the firmware/bios/bmc on this to latest when the
install failed to boot the first time
1022[20:44:55] <shtrb> somiaj, thanks will look into it now
1023[20:44:58] <somiaj> shtrb: okay, so if using network manager,
you need to enable NetworkManager-wait-online.service -- is that enabled?
1024[20:45:14] <ndroftheline> (latest being ~2018 iirc, heh.
it's an old machine.)
1025[20:45:29] <somiaj> note this also means it will slow down
your boot time (just so you are aware) since it will explicitly have
to wait for the network when doing some things
1026[20:45:56] <shtrb> somiaj, yes , but I will not add that and
see
1028[20:48:23] <s3a> Hello to everyone who's reading this.
:) Does anyone know how to use the GNU / Linux command-line
interface to change passwords of odt (and ods, etc.) files (assuming
that it is possible to do)? I wasn't able to get it done with
the libreoffice, lowriter and unoconv CLI utilities.
1046[21:06:57] <ndroftheline> hm, i can't see in the
installer where to make a btrfs mirror
1047[21:07:34] <somiaj> ndroftheline: I think you have to load an
additional udeb for that.
1048[21:07:46] <ndroftheline> oh, ok.
1049[21:07:50] * ndroftheline wonders what a udeb is
1050[21:08:11] <somiaj> ndroftheline: a special deb made for the
installer, basically a stripped down .deb
1051[21:08:28] <somiaj> ndroftheline: there should be a
'load additional components' or similar option where you can
select additional things to load into the installer
1052[21:08:33] *** Quits: AF04FB9290474265 (~Throwaway@replaced-ip) (Remote host closed the connection)
1053[21:08:47] <ndroftheline> i started the installer in normal
text mode, do i need advanced installer for that?
1054[21:09:12] <somiaj> Unsure there, I always use expert mode,
so don't know if such things are stripped out of the normal
mode or not
1055[21:09:17] <sney> you can also install debian to a single
member disk, and add the second disk and convert it to a mirror
afterwards, with 2 commands
replaced-url
1057[21:09:52] <ndroftheline> wow that sounds much better. this
old box takes ages to boot thanks sney
1058[21:10:03] *** Quits: AF04FB9290474265 (~Throwaway@replaced-ip) (Remote host closed the connection)
1059[21:10:46] <sney> np
1060[21:10:52] <ndroftheline> hm, so guided partitioning on a
single disk in normal install mode set the partition to type ext4.
do i just change that to type btrfs?
1061[21:11:11] <sney> yep
1062[21:11:34] <ndroftheline> cool, doing that. ta
1078[21:16:50] <dpkg> Ubuntu is based on Debian, but it is not
Debian. Only Debian is supported on #debian. Use #ubuntu on
chat.freenode.net instead. Even if the channel happens to be less
helpful, support for distributions other than Debian is offtopic on
#debian. See also <based on debian> and <ubuntuirc>.
1079[21:17:19] <petn-randall> tehuty: That looks like Ubuntu to
me, I'd ask in their support channel. ^^^
1080[21:17:19] <tehuty> thx
1081[21:19:43] <ndroftheline> oh does that conversion need to
happen in a live environment sney
1082[21:20:15] <sney> ndroftheline: nope, mirroring can be done
live
1086[21:21:50] <john_rambo> I just used rkhunter to scan for
rootkits. This is the first time I am using rkhunter. The log says
"Possible rootkits: 8" but the log is so huge I can't
locate the names of the rootkits ....Someone familiar with rkhunter
please help me spot the rootkits ....replaced-url
1087[21:22:12] <ndroftheline> then btrfs device add /dev/sdo2 / ?
as in, add the second partition on the additional drive as a mirror
to root ?
1088[21:23:02] <sney> ndroftheline: honestly, not sure, you might
need to tinker a little bit. I'm basing this off the wiki page
I linked above, and the basic understanding of how CoW filesystems
behave, rather than personal experience.
1090[21:23:39] <ndroftheline> fair. yeah i'm basing my
questions off the wiki you linked to. the commands suggest the root
partition must be explicitly mounted to /mnt and then the commands
issued against partitions rather than drives.
1091[21:23:52] <ndroftheline> i'll tinker ta
1092[21:25:14] <sney> ndroftheline: there is also #btrfs that
might have more specific advice
1093[21:26:16] <ndroftheline> yeah! if this breaks i'll try
there. i just got to a root prompt on the newly installed system,
btrfs device add /dev/sdm / && btrfs balance start
-dconvert=raid1 -mconvert=raid1 / , and got a "Done, had to
relocate 5 out of 5 chunks" message. worked?
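For the record, the conversion that produced that message, plus a verification step (device name as in the log; the check is standard btrfs-progs):

```shell
btrfs device add /dev/sdm /                            # attach the second disk
btrfs balance start -dconvert=raid1 -mconvert=raid1 /  # mirror data + metadata
btrfs filesystem df /   # should now show Data, RAID1 and Metadata, RAID1
```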
1102[21:35:14] <nefernesser> I have a thinkpad that I'm
trying to install debian on, but the bios is fucked and I can't
boot usbs from it so I'm installing debian on the ssd using
another computer. I'm manually partitioning (trying) the disk
because it needs GPT and EFI etc to boot. Anyway I just went to
select the boot partition and select how it'll be used, and there
isn't an option to select EFI System Partition, what do?
1109[21:46:21] <ndroftheline> nefernesser, are you sure you
booted the installer in uefi mode? i think you can confirm that by
checking for the existence of a populated efi folder somewhere. you
may also find more options in the advanced installer
1146[22:26:45] <ndroftheline> lol raid1 system drive on btrfs
does not feel common/well-supported based on how the convo in #btrfs
is going. apparently the ESP has to be managed externally
1147[22:27:16] <ndroftheline> manually copy partitions from
installed drive, convert the root partition to mirrored btrfs, sweet
1148[22:27:33] <ndroftheline> but then if your first drive fails
or is removed, the machine won't boot.
1149[22:28:54] <ndroftheline> re: esp in a raid1, "the only
practical way to support software raid is via firmware raid. So the
firmware needs to support DDF or imsm, both of which mdadm
supports"
1165[22:38:16] <ndroftheline> ok i'll try with the other
"more common than fakeraid" raid1 setup: plain mdadm, which
appears supported in the debian installer directly
1166[22:38:26] <s3a> petn-randall, I don't have a problem
when using the GUI; it's just that I have a lot of open
document files (made with LibreOffice), and I'd like to change
their passwords in bulk with a script.
1167[22:39:18] <ndroftheline> text mode normal install, wrote new
gpt to both member disks, created raid1 md device via the installer
options, chose use entire drive, picked default everything in one
partition choice, now installing.
1170[22:44:43] <ndroftheline> crap! this is the same failure
screen i got the first time i tried using plain mdadm. big red
screen, "Unable to install GRUB in dummy. Executing
'grub-install dummy' failed. This is a fatal error."
1171[22:45:13] <sney> that happens sometimes, go back to the menu
and install it on the member devices
1218[23:29:17] <tokenman> Dear Debian-Community - I have a
question regarding a Lenovo T14s with Ryzen 7 PRO 4750U - after
installing plain Debian I enabled contrib and non-free repositories
and installed missing software according to
replaced-url
1219[23:29:19] <tokenman> /dev/dri/card0: No such file or
directory - do you have an idea how to use debian on that machine?
1221[23:33:57] <sney> tokenman: some newer ryzens are not
supported properly by the 4.19 kernel, try with 5.10 and
firmware-amd-graphics from buster-backports
1222[23:34:03] <sney> !buster-backports
1223[23:34:04] <dpkg> Some packages intended for Bullseye (Debian
11) but recompiled for use with Buster (Debian 10) can be found in
the buster-backports repository. See
replaced-url
1227[23:36:14] <sney> noord: it should but might need firmware,
1228[23:36:18] <sney> !i915 firmware
1229[23:36:19] <dpkg> Some Intel UHD GPUs made after 2015 require
firmware from userspace for all features to be enabled. This
includes Skylake, Kabylake, Broxton, Cannonlake and possibly others.
Ask me about <non-free sources> and install
firmware-misc-nonfree to provide.
1230[23:37:39] *** Quits: CombatVet (~c4@replaced-ip) (Remote host closed the connection)
1232[23:38:07] <ndroftheline> hey all, ok i'm going to try
to do this properly; seems my guesses about how to use the installer
are wrong. has anybody successfully set up a mirrored system drive
in debian? is this still the best practice?
replaced-url
1233[23:39:58] <tokenman> dpkg: Thank you very much for your
helpful answer! I will try to install the software from backports -
or would you advise Debian Bullseye RC 1?
1234[23:39:58] <dpkg> tokenman: no worries
1235[23:40:01] <jhutchins> apcupsd integrates nicely with the
power manager.
1236[23:40:10] <sney> ndroftheline: LVM is handy particularly if
you want snapshots, but it's still optional. the only thing I
thought you would see is this menu,
replaced-url
1237[23:41:01] <sney> tokenman: bullseye is pretty close to
release, so that might be an easier option out of the box
1241[23:46:48] <tomreyn> ndroftheline: i'm pretty sure the
first part of this guide (until / excluding "Install the
bootloader (lilo)") should still work (only) IF you're
BIOS booting from mbr partitioned devices. with BIOS + GPT
you'd also need a bios-grub partition, with EFI booting
you'll also need an ESP
1243[23:47:40] <ndroftheline> righto, that makes sense
1244[23:48:09] <tomreyn> ndroftheline: note that in this example,
software raid is spun across partitions, not full disks. i guess
full disks also works if you still have another place to put the
boot loader.
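tomreyn's two cases as hypothetical sgdisk invocations (EF02 and EF00 are the standard GPT type codes; sizes and /dev/sdX are placeholders):

```shell
# BIOS + GPT: a tiny bios-grub partition for GRUB's core image
sgdisk -n1:0:+1M -t1:EF02 /dev/sdX
# EFI booting: an EFI System Partition instead (FAT32, type EF00)
sgdisk -n1:0:+512M -t1:EF00 /dev/sdX && mkfs.vfat -F32 /dev/sdX1
```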
1245[23:48:14] <ndroftheline> the esp seems to be the sticking
point; i'm getting conflicting information about whether
it's a good idea, and not having any luck with respect to
making it work anyway
1246[23:48:16] <jhutchins> Are LVM snapshots similar to VMWare
snapshots?
1253[23:50:58] <ndroftheline> there's a guy in #btrfs saying
that putting an esp on a software raid and then experiencing a
hardware failure will frequently result in a broken system, at least
if i understood it right
1254[23:51:23] <tomreyn> ndroftheline: don't put esp on raid
in the first place.
1255[23:51:41] <jmcnaught> jhutchins: LVM is the Logical Volume
Manager; its snapshots are of block devices using device mapper.
1256[23:52:20] <tomreyn> ndroftheline: if you want the mainboard
firmware to be able to read and write data on the esp, it would need
to understand any intermediate layers
1260[23:55:31] <noord> sney: I consider switching from pretty old
ubuntu 16.x with i3wm to buster, do you recommend it?
1261[23:56:04] <sney> noord: sure.
1262[23:56:14] <noord> I have couple of hesitations, wifi and gpu
support and wayland
1263[23:57:07] <sney> anything supported by ubuntu 16 will work
in buster, almost certainly. you might need firmware for some
components, but there is an installer that includes it. and wayland
is optional.