[19:02:41] Right! As a certain Lord Flashheart would put it, LET'S DOOOOOO IT!
[19:03:00] * soap here (in case sam_ goes offline)
[19:03:13] 1. Roll call
[19:03:19] Make it so.
[19:03:21] * marecki here, unsurprisingly
[19:03:21] * dilfridge here
[19:03:25] * mgorny here
[19:03:32] * ulm here
[19:03:34] * gyakovlev here
[19:03:35] * ionen here (proxy for mattst88)
[19:03:56] Excellent.
[19:03:58] Complete agenda for today: https://archives.gentoo.org/gentoo-project/message/d5466ee50adb843f59804a869d53b94b
[19:04:25] 2. Review of proposed Infra server/network purchases
[19:04:32] robbat2: The floor is yours.
[19:04:36] Hi
[19:04:49] * soap here
[19:04:59] i'll cover the networking first, because I feel there's less to discuss there
[19:05:09] relevant doc page: https://wiki.gentoo.org/wiki/Project:Infrastructure/Shopping_list#Switch
[19:05:45] TL;DR: the long-time network switch that infra had at OSL, originally purchased in 2011, died in late 2020, and hasn't been replaced. OSL had a loaner for us
[19:05:50] but they need the loaner back
[19:06:26] at the same time, that old switch also isn't suitable for future growth needs: the hypervisor hosts used crossover networking to provide high bandwidth between the pair
[19:07:23] so Infra proposes buying 2 switches: 1 for OOB & 1Gbit service in the rack, as well as a high-bandwidth switch for the faster systems: 3 so far, with some more coming in the next section's discussion
[19:07:40] this would be under $5k USD in total
[19:08:03] ++
[19:08:06] exact amount to vary depending on shipping, taxes, exact transceiver options
[19:08:23] any questions about that plan?
[19:09:04] not from me (it makes sense to plan ahead there)
[19:09:15] are you asking us to give the final approval today?
[19:10:03] if the council feels they need additional information, i'm fine with tentative approval pending that
[19:10:15] just curious why this vendor? not that I mind, just asking.
[19:10:16] I certainly second the DAC recommendation, it's cheaper than fiber and way way cooler (temp-wise) than 10GBase-T RJ45
[19:10:49] FS.com as the vendor? no support/licensing lock-in compared to almost every other vendor
[19:12:27] thanks. makes sense.
[19:12:29] DAC: there are some existing 10GBase-T hosts we need to support, so I was thinking of buying just a few 10GBase-T transceivers to support that, and avoiding it otherwise
[19:12:38] *just a few
[19:13:10] robbat2: what does DAC in that context mean? hard to google :D
[19:13:24] Direct Attach Copper
[19:13:31] ah
[19:13:40] twinaxial copper cable for short distances
[19:14:02] dilfridge: it's a copper cable with passive transceivers integrated. basically instead of hot RJ45 you just get a single cable.
[19:14:12] so basically a copper cable with transceivers on it, yes
[19:14:12] https://www.fs.com/products/21254.html?attribute=1318&id=222508
[19:14:17] as opposed to having active optical transceivers and fiber optic cabling, or ethernet cabling
[19:15:04] RJ45 transceivers are HOT! cheap ones get like 100C hot easily,
[19:15:38] * dilfridge sneaks around "wanna have some hot transceivers?"
[19:15:57] anyway
[19:15:58] also saves money. instead of 2x transceivers + RJ45 cable, just a single cable at like 1/3 of the total price.
[19:16:08] 5K sgtm
[19:16:54] so how do we do this, should we do a vote approving $5k?
[19:17:03] marecki: as the chair, can you call for a vote on it? "no", "yes, final", "yes, tentative"; in case other council members want more info
"no", "yes, final", "yes, tentative"; in case other council members want more info [19:17:21] i suggest the wording of the motion: "Approve up to $5k USD for Infra networking purchase" [19:17:55] I'd say yes, we vote. Unless anyone has anything else to say? For me, both the technical description and the rationale posted on the ML have been clear and sufficient. [19:18:59] ++ [19:19:14] fire it up [19:19:16] let's do it [19:19:25] That said, I wonder if we should make the motion at least somewhat bound with said technical description. Not that I expect Infra to go on a shopping spree, I just don't like rubberstamping as a matter of principle. [19:20:20] I trust that they won't buy washing machines instead :) [19:20:38] "Approve up to $5k USD for Infra networking purchase as proposed in referenced documentation" ;-) [19:20:51] They could buy half a million cheap switches for home use ;-) [19:20:53] Okay then. Motion to vote on: approve up to 5000.00 USD for Infra networking purchases as outlined in https://archives.gentoo.org/gentoo-project/message/89be5013dc4d0921e7cf927d5f3e356c [19:20:57] https://wiki.gentoo.org/index.php?title=Project:Infrastructure/Shopping_list&oldid=1048061 [19:21:21] * dilfridge yes [19:21:25] * ionen yes [19:21:27] * mgorny yes [19:21:27] * ulm yes [19:21:28] * marecki yes [19:21:28] * gyakovlev yes [19:22:06] * marecki looks towards the Swiss Alps and taps his foot [19:22:24] * soap yes [19:22:43] 7 for, 0 against, 0 abstain. Motion passes. [19:22:58] Part deux: servers. Back to you you, robbat2. [19:23:14] relevant doc: https://wiki.gentoo.org/index.php?title=Project:Infrastructure/Shopping_list&oldid=1048061#Computing [19:23:33] part of this deserves slightly more backstory [19:24:17] Gentoo, at OSL, peeked at a large number of systems before: woodpecker, dove, dipper, ganeti2, vulture, finch, 5x Supermicro atoms (brambling, etc) [19:25:00] *actually ganeti1 & ganeti2 [19:25:25] over time, when the oldest things started dying or being too slow, they were migrated into VMs on ganeti1 & ganeti2 [19:25:44] when those themselves were problems, we bought the current hypervisor pair, oriole & ovenbird [19:25:56] (and ganeti1 became dipper) [19:26:58] the two large systems remaining that haven't been virtualized now are that dipper, and jacamar, a donated server (originally from Flameeyes) [19:27:18] plus Vulture, which is much smaller and also 15 years old [19:28:11] there is not enough capacity to absorb those 3 systems which are showing signs of near-future failure into the current hypervisors [19:28:18] and there's also requests for more CI capacity [19:28:39] dipper is mastermirror, jacamari is ci [19:29:27] so Infra would like to buy NEW hypervisors, that replace the power budget of those 3 systems; to consolidate, and offer excess capacity as CI [19:29:45] the old hypervisor pair would continue to be used for CI & redundant services [19:29:56] robbat2: so are we going to have two (stronger) hypervisors still, or more than two? [19:30:01] at least 2 [19:30:14] and remaining machines physical? or are we aiming for more virtualization? [19:30:34] virtualize everything that isn't special case hardware for arches [19:30:53] e.g. muta.hppa, guppy.ia64, ??.sparc64 [19:31:02] well, wfm [19:31:17] any questions about the rationale, or high-level plan? [19:31:39] i suppose the end summary is 'we get newer [i.e. less likely to fail], possibly stronger, more efficient hardware', correct? [19:31:45] yes [19:31:56] Sounds reasonable... 
[19:32:01] what do we do about the old machines?
[19:32:03] jacamar lost a disk this week, which was a good fresh reminder
[19:32:19] (dropping in while running around: i'm very happy about the idea of more virtualisation, which lets us try experiments on hardware without needing to provision a new physical box or decommission/remove secrets for developers to have access)
[19:32:40] jacamar is still "good" enough, but I don't know if we have the electrical power budget to keep it
[19:32:55] if we can still keep jacamar, i'll certainly try it
[19:33:35] does OSL take care of utilizing them or...?
[19:33:46] we have donated hardware back to OSL before, yes
[19:34:03] if we couldn't fit it, or didn't need it
[19:34:15] i think that's mostly happened with old alt-arch hardware
[19:34:17] also, sorry, one question: we're making good/returning the switch OSL donated to us, yes?
[19:34:28] returning their loaner, yes
[19:34:59] so, i'll say if there's no more Q's about rationale or high-level, i'll move on to the detail part
[19:35:06] ++
[19:35:16] Infra has been collecting quotes and potential detailed specifications
[19:35:48] thank you to bonsaikitten/patrick, slashbeast, blueknight, antarus and others who have used their vendor contacts
[19:36:11] I started a comparison worksheet, that's available as HTML here: https://dev.gentoo.org/~robbat2/20220213-potential-infra-server-options/
[19:36:25] i caution that the options aren't entirely apples-to-apples
[19:36:43] some vendors don't have close enough options to permit a comparison like that
[19:37:10] but, I feel it's good enough to discuss
[19:37:59] columns B-E, and G, are SATA/SAS based options, that are $13k-18k USD/ea
[19:38:14] columns F, H-I are trying to go pure NVME
[19:38:25] and come up $20k-$50k/ea
[19:39:06] column F, Thinkmate's offering, and column H, Dell's NVME option, are the closest apples-to-apples
[19:39:38] they have slightly different RAM, and Thinkmate is still significantly cheaper than Dell
[19:40:03] i think we should finish the quoting
[19:40:10] but I would like to ask for tentative approval
[19:40:16] for $25k USD/server
[19:40:28] subject to a final approval in the next council meeting
[19:40:34] the spreadsheet is net or gross?
[19:40:51] initial quantity 2 this fiscal year; with an option for one more in the next fiscal, pending CPA advice
[19:41:08] dilfridge: as-is from the vendors
[19:41:26] right, the us sales tax insanity
[19:41:27] robbat2: what's their expected lifetime?
[19:41:43] infra is good at running servers for ~10 years lifespan
[19:42:23] as I said, vulture is 15 years old; finch that failed last year was 14 years old
[19:42:57] 25k means you tend towards ... F?
[19:42:58] dipper(ganeti2) is just over a decade, and starting to be not usefully fast for the IO work
[19:43:02] I feel like dells have a lot of added support/warranty cost that may not benefit gentoo. I'd personally favour a non-lock-in vendor like you did with switches.
[19:43:27] i'd be even for the more expensive variants then
[19:43:33] i suppose our operations are often I/O bound
[19:43:33] yes, Thinkmate's variant seems best to me, but I want to ask other vendors for something comparable
[19:43:53] i'm trying to find a better supermicro rep
[19:43:59] I have contacts with one of the vendors on the list
[19:44:11] SMC tbp (not work related, personal contact)
[19:44:14] so I can ask.
[19:44:17] robbat2: so these two this year, will they be for hypervisor replacements or...?
[19:44:33] yes, hypervisor capacity, and jacamar & dipper will be virtualized
[19:45:39] yeah, as a non-expert, thinkmate's variant seems to fit
[19:46:02] gyakovlev: if my other request fails, i'll check with you for the contact
[19:46:18] no further questions from me
[19:46:33] as a non-expert, how "expandable" are these usually? say, plug in another 8T NVMe later?
[19:46:46] that IS in the larger version of the plan
[19:46:53] I'll mail them now, not sure if they still work there. if they do I'll be able to provide mfg drop quote.
[19:46:58] ok good
[19:47:14] no more questions from me
[19:47:27] 2x RAID1 NVME to start, and the smallest chassis there goes up to 6x disks
[19:47:34] the largest chassis goes to 16x disks
[19:47:35] suspected so
[19:48:57] any other questions about the detail part? I have one more part before the motion itself
[19:49:08] (last part is in my role as treasurer)
[19:49:42] Go ahead
[19:50:10] closing FY2021, we had $160k cash-on-hand
[19:51:02] we've got approximately $170k cash-on-hand right now, with no further large expenditures due this fiscal (taxes are already paid etc)
[19:51:31] still, the proposed 2x $25k = $50k is a substantial fraction of the savings
[19:51:52] so i'd like to know if there are financial questions
[19:52:01] we would be using the IRS's accelerated depreciation on these servers
[19:52:20] to offset taxes due on income in the next 3 years
[19:53:38] Do these affect our potential umbrella move somehow?
[19:54:18] neutral impact, as long as we have enough for all legal fees needed by the move
[19:54:28] I.e. is it better to buy first, or e.g. move money to the umbrella and buy there?
[19:55:08] if we moved to the umbrella first, there is the potential of non-profit discounts, at the human cost of a harder purchasing process
[19:55:20] heh, suspected that.
[19:55:26] but we need the hardware sooner than we could move to the umbrella
[19:56:02] (there's no way we could complete the umbrella move this fiscal, and we want the tax advantage this fiscal)
[19:56:13] good point.
[19:56:26] rn is a good time to buy HW. cost2perf ratio is insane with this upgrade. and it's quite future-proof.
[19:57:04] my very rough estimate of net tax savings is $3-5k/year
[19:57:47] ok, so if there are no more questions on any parts
[19:57:49] that motion
[19:58:27] "tentative approval of infra's proposed hypervisor purchase, 2x servers at $25k USD/ea; per https://wiki.gentoo.org/wiki/Project:Infrastructure/Shopping_list#Computing"
[19:58:53] and 2nd motion, "option for 1x more server in early FY2023"
[19:59:35] The timeline for details for the first one is next month. And the second?
[20:00:08] it would be the same specifications, just purchased after 2022/07/01
[20:00:46] Let me rephrase: when will Infra want NON-tentative approval for the third machine?
[20:01:44] i'm not 100% certain we need the capacity, but the CPA suggested we do it from the tax perspective
[20:01:58] so while you could approve it, we might not actually spend it, and just wait
[20:02:00] Thanks for clarifying the date by the way, I left the US in early 2010 so I have got NO idea any more when fiscal years over there start.
[20:02:02] capacity tends to get filled
[20:02:21] Foundation's tax year runs 07/01 -> 06/30
[20:02:45] that's 1/july - 30/june in decent spelling :D
[20:03:00] Right.
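As a rough cross-check of the "$3-5k/year" net tax savings estimate above: a minimal arithmetic sketch, assuming a hypothetical flat 21% effective tax rate and the $50k deduction spread evenly over the three fiscal years mentioned. These assumptions are for illustration only; the actual CPA figures are not part of this log.

```python
# Back-of-the-envelope check of the depreciation tax-savings estimate above.
# Assumptions (not from the meeting): a flat 21% effective tax rate and the
# deduction spread evenly over three fiscal years; the real rules (bonus
# depreciation / Section 179, state taxes) differ in the details.
purchase = 2 * 25_000          # two hypervisors at $25k each
assumed_tax_rate = 0.21        # hypothetical effective rate
years = 3                      # "offset taxes due on income in the next 3 years"

total_offset = purchase * assumed_tax_rate
per_year = total_offset / years
print(f"total tax offset ~ ${total_offset:,.0f}, ~ ${per_year:,.0f}/year")
# -> total tax offset ~ $10,500, ~ $3,500/year -- consistent with "$3-5k/year"
```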
[20:03:02] approval not later than end-of-May
[20:03:19] next month should be doable imho
[20:03:42] we need to order AND take possession of the hardware in this tax year
[20:03:46] for the first batch
[20:04:23] Dell's lead time is 6-8 weeks right now, I expect other vendors are higher
[20:04:24] I mean, realistically speaking, given our typical business, what other large costs do we have that we'd need to save for?
[20:04:50] that may be problematic with EPYC availability. some ETAs on EPYCs are AUG 2022, so it depends on the exact spec.
[20:05:05] yep, spec variation to fit the timeline is likely
[20:05:20] Motion: tentative approval of Infra's proposed hypervisor purchase of 2 servers at 25,000.00 USD each, as per https://wiki.gentoo.org/wiki/Project:Infrastructure/Shopping_list#Computing , with full details to be provided by and binding approval scheduled for the March 2022 meeting of the Council. The purchase is expected to take place by 2022-06-30.
[20:05:26] NVME is also about to crunch due to WD's contamination issue
[20:06:39] * dilfridge yes
[20:06:39] * gyakovlev yes
[20:06:45] * ionen yes
[20:06:48] * mgorny yes
[20:06:51] * ulm yes
[20:06:54] * soap yes
[20:07:09] * marecki yes
[20:07:23] Motion passes unanimously.
[20:08:08] thanks; if there are specific requests for the final details, please let me know on the mailing list, or at least in #gentoo-infra
[20:09:08] i have to go now for the kids, so i'll leave you to the EAPI-9 discussion pieces, closing with: please don't repackage distfiles
[20:09:42] (i did include an idea about out-of-tree Manifests in my last post to -project)
[20:09:49] bye
[20:10:18] Motion: tentative approval of Infra's proposed hypervisor purchase of 1 additional server with similar specifications and the same cost estimate, to be purchased between 2022-07-01 and 2023-06-30. Infra will decide whether such a machine is needed, submit details and request binding approval by the end of May 2022.
[20:10:46] * dilfridge yes
[20:10:48] * marecki yes
[20:11:24] * gyakovlev yes
[20:11:34] * mgorny yes
[20:11:35] * ionen yes
[20:11:42] * ulm yes
[20:11:57] * soap yes
[20:12:03] Motion passes unanimously.
[20:12:41] Onwards to the fun part! 3. Pre-approval of EAPI-9 features.
[20:12:49] * marecki invites ulm to the grilling platform
[20:12:59] https://wiki.gentoo.org/wiki/Future_EAPI/EAPI_9_tentative_features
[20:13:12] this is supposed to be a small/quick EAPI
[20:13:24] two profile features and one bugfix
[20:13:33] and dilfridge wants eclass revisions :)
[20:13:50] :)
[20:14:08] been needing that bugfix myself, and the rest looks sensible
[20:14:13] dilfridge: maybe you could elaborate on this? the rest is pretty much self-explanatory, I thin
[20:14:16] +k
[20:14:17] ack
[20:14:34] can we go over them one by one?
[20:14:35] so the basic problem that we're trying to solve is the following
[20:14:52] mgorny: yeah, bottom up :)
[20:15:24] 1) if you change dependencies in an ebuild, you are supposed to do a revision bump (see the dynamic deps discussion)
[20:15:53] 2) now what should we do if the dependencies are introduced by an eclass, which changes?
[20:16:12] Bumping all ebuilds that use it is theoretically possible
[20:16:30] In practice, with stuff like perl-modules.eclass, used by 800+ ebuilds, this is hard and messy.
[20:16:39] so
[20:17:03] being able to affect overlays without revbumps would be a plus
[20:17:14] the idea is to give eclasses an *internal* revision number
[20:18:02] plus bumping lots of ebuilds implies git wear and tear
[20:18:06] and to derive an *internal* "microversion" of an ebuild from all inherited eclasses, in a way that it always only increases
[20:18:31] example implementation would be,
[20:18:50] just to be clear, this microversion would be an implementation detail affecting only rebuilds? i.e. other ebuilds wouldn't be able to query or depend on them
[20:19:04] every eclass declares ECLASS_REVISION=20220213 or similar, and the ebuild takes the latest of all
[20:19:36] mgorny: exactly, that's why I say *internal*... it's got lower precedence than the revision -rX and cannot be queried or depended on
[20:20:21] mgorny: won't be part of the dependency syntax, but other details need to be worked out, I think
[20:20:21] this means, when an eclass revision is increased, portage can rebuild the ebuilds that inherit this eclass
[20:20:23] dilfridge: i.e. bumping ECLASS_REVISION implicitly bumps all consumers
[20:20:24] e.g. should the eclass revision be exposed to the ebuild as a variable?
[20:20:41] soap: exactly
[20:20:45] ulm: that is up to discussion
[20:21:00] we need to decide how it enters md5-cache
[20:21:33] i suppose we could store it as a dict of (eclass -> revision), like we store eclass hashes now
[20:21:59] this somewhat implies that eclass revision needs to be static
[20:22:06] md5-cache is not part of PMS
[20:23:17] in any case, there is still space for discussion of the details of both specs and implementation
[20:23:20] still, we should define that it's part of the cache
[20:23:33] there's one important question though: who's gonna implement it?
[20:23:33] probably, yes
[20:24:40] mgorny: we'll review progress later. this doesn't block pre-approval IMHO
[20:25:26] if it's not ready then it'll be postponed to EAPI 10
[20:26:06] Indeed.
[20:26:15] anything else on this features?
[20:26:19] *feature
[20:27:00] Shall we, then? Motion: tentative approval for the inclusion of support for eclass revision (https://bugs.gentoo.org/show_bug.cgi?id=806592) into EAPI 9, pending detailed specification and implementation.
[20:27:19] s/revision/revisions/
[20:27:37] * ionen yes
[20:27:41] * dilfridge yes
[20:27:52] * marecki abstain
[20:27:56] * ulm yes
[20:28:04] * mgorny yes
[20:28:08] * gyakovlev yes
[20:28:27] (nothing against the feature itself but I'm getting a migraine and I just can't process these details right now)
[20:28:50] * soap abstain
[20:29:08] 5 for, 0 against, 2 abstain. Motion passes.
[20:29:24] "EAPI of profiles defaults to repository EAPI"
[20:29:26] next, profiles EAPI
[20:29:28] bug 806181
[20:29:29] https://bugs.gentoo.org/806181 "[Future EAPI] EAPI of profiles should default to repository EAPI (in profiles/eapi)"; Gentoo Hosted Projects, PMS/EAPI; CONF; ulm:pms
[20:29:37] Looks pretty self-explanatory...
[20:29:44] we currently have about 400 eapi files in profiles
[20:29:58] I can only feel "why not?"
[20:30:08] the idea is to default to the one of the top-level profiles dir
[20:30:28] this will be in the far future though
[20:30:30] ionen: My thought exactly
[20:30:43] profile EAPI updates require a long waiting time
[20:30:51] but let's get it in now :)
[20:31:25] let's vote
[20:31:28] any additional discussion needed?
[20:31:34] I don't think so
[20:32:32] marecki: you come up with a motion, or shall I?
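To make the eclass-revision feature discussed above more concrete, here is a minimal Python sketch of the idea as described in the meeting: each eclass declares something like ECLASS_REVISION=20220213, the ebuild's internal micro-revision is the latest of all inherited eclasses, and the package manager rebuilds when that value grows. The function names and the per-eclass cache mapping are illustrative assumptions, not the final PMS wording or Portage code.

```python
# Illustrative sketch only -- not Portage code and not the final EAPI 9 spec.

def micro_revision(eclass_revisions: dict[str, int]) -> int:
    """The ebuild "takes the latest of all": the highest ECLASS_REVISION
    among its inherited eclasses (0 if it inherits nothing)."""
    return max(eclass_revisions.values(), default=0)

def needs_rebuild(installed: dict[str, int], in_tree: dict[str, int]) -> bool:
    """Rebuild when the internal micro-revision has grown since install.

    'installed' would come from the installed package's metadata and
    'in_tree' from the ebuild's cache entry -- e.g. stored per eclass,
    much like eclass hashes are stored today (hypothetical layout).
    """
    return micro_revision(in_tree) > micro_revision(installed)

# Example: perl-modules.eclass bumps its revision; its 800+ consumers become
# eligible for rebuild without touching any ebuild or its -rX revision.
installed = {"perl-modules": 20220101, "toolchain-funcs": 20211201}
in_tree = {"perl-modules": 20220213, "toolchain-funcs": 20211201}
assert needs_rebuild(installed, in_tree)
```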
[20:32:48] Motion: tentative approval for the inclusion of support for the EAPI of profiles defaulting to repository EAPI (https://bugs.gentoo.org/show_bug.cgi?id=806181) into EAPI 9, pending detailed specification and implementation.
[20:32:50] * marecki yes
[20:32:57] * mgorny yes
[20:32:59] * gyakovlev yes
[20:33:00] * ionen yes
[20:33:02] * dilfridge yes
[20:33:02] * ulm yes
[20:33:14] * soap yes
[20:33:24] Motion passes unanimously.
[20:33:30] NEEEEXT!
[20:33:31] next, comments in profile parent files
[20:33:33] bug 470094
[20:33:35] ulm: https://bugs.gentoo.org/470094 "[Future EAPI] parent files should support comments/blank lines"; Gentoo Hosted Projects, PMS/EAPI; CONF; tomwij:pms
[20:33:46] self-explanatory :)
[20:34:01] The only thing I can say here is "why so late???".
[20:34:32] Motion: tentative approval for allowing comments in "parent" profile files (https://bugs.gentoo.org/show_bug.cgi?id=470094) into EAPI 9, pending detailed specification and implementation.
[20:34:39] * mgorny yes
[20:34:40] * marecki yes
[20:34:40] * gyakovlev yes
[20:34:40] marecki: good question, seems nobody pushed for it
[20:34:42] * ionen yes
[20:34:42] * ulm yes
[20:34:44] * dilfridge yes
[20:34:58] might as well while making other profile changes anyway
[20:35:11] * soap yes
[20:35:13] Motion passes unanimously.
[20:35:18] next, bug 815169
[20:35:19] https://bugs.gentoo.org/815169 "[Future EAPI] econf: Ensure proper end of string in configure --help output"; Gentoo Hosted Projects, PMS/EAPI; CONF; ulm:pms
[20:35:24] basically, we're currently looking for strings like --enable-static in configure --help output, but packages may support options like --enable-static-foo
[20:35:30] yes, yes, yes!
[20:35:33] causing a false positive
[20:35:38] have one ebuild I left on eapi=7 because of that atm
[20:35:40] this was a major oversight
[20:36:04] Sounds like a "but thou must" thing
[20:36:09] in EAPI 4, I think :)
[20:36:50] btw openjdk ebuilds die on --disable-static, so stuck on EAPI=7 for now or rolling custom econf... but I'll handle this later separately. just remembered that
[20:37:06] imo, this is just a QoI issue, but w/e
[20:37:46] well, that's one reason for a quick EAPI 9
[20:38:00] 8.1
[20:38:02] can we vote?
[20:38:04] * mgorny hides
[20:38:08] Motion: tentative approval for ensuring proper string termination in the matching of "configure --help" output in econf (https://bugs.gentoo.org/show_bug.cgi?id=815169) into EAPI 9, pending detailed specification and implementation.
[20:38:08] lol
[20:38:12] or just fix it in portage and pkgcheck
[20:38:17] * mgorny yes
[20:38:20] * dilfridge yes
[20:38:21] * marecki yes
[20:38:21] * ionen yes
[20:38:23] * gyakovlev yes
[20:38:24] * ulm yes
[20:38:42] * soap yes
[20:38:44] Motion passes unanimously.
[20:38:58] thank you :D
[20:39:04] ulm: Do you want to bring up that ML discussion, or is that beyond the scope?
[20:39:40] marecki: about A/Go/distfiles?
[20:39:46] Yup
[20:39:56] tentative src_fetch_extra?
[20:40:39] it's an ongoing discussion, and we're far from finalising anything
[20:40:49] so it won't be part of EAPI 9 IMHO
[20:40:53] * dilfridge unhappy about anything that circumvents manifests
[20:40:59] well, i personally think we can reject src_fetch_extra straight away
[20:41:00] how is this different from the 20-year-old src_fetch discussion?
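For bug 815169 discussed above, a small self-contained illustration of the false positive: a plain substring check against `configure --help` output matches `--enable-static-foo` when the option being looked for is `--enable-static`, while anchoring the end of the option name does not. The help text and the exact terminator set are assumptions for illustration; the real check lives in the package managers' econf implementations.

```python
import re

# Hypothetical excerpt of `./configure --help` for a package that only
# offers --enable-static-foo, not --enable-static itself.
help_output = """
  --enable-static-foo     enable the static foo backend
  --enable-shared         build shared libraries
"""

# Roughly the current behaviour: a plain substring test -- a false positive,
# so econf would pass --disable-static to a configure that doesn't know it.
assert "--enable-static" in help_output

# Roughly the proposed fix: require the option name to terminate properly
# (whitespace, '=', '[' or end of line), so --enable-static-foo won't match.
pattern = re.compile(r"--enable-static(=|\[|\s|$)", re.MULTILINE)
assert pattern.search(help_output) is None
```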
[20:41:08] if only to save time on having to discuss it further
[20:41:11] ionen: looks like openjdk is left out exactly because of that too (it has --enable-static-build), so count+1 =)
[20:41:20] it's not clear at all what a possible solution would look like
[20:41:21] OK, let's leave it for another time (if any).
[20:41:23] I see ;p
[20:41:25] reminds me i still haven't pointed out the dependency problem
[20:41:40] 4. Scheduling regular reviews of arch status by the Council
[20:41:43] soap: it isn't
[20:41:45] i know not everyone is happy with it but repackaging works meanwhile
[20:41:56] good, because it's just rehashing this old debate
[20:41:57] and src_fetch_extra will make it a live ebuild
[20:42:00] i have one more thing wrt EAPI 9 but i'll wait for open floor
[20:42:03] so no keywords
[20:43:17] Long story short: last month we agreed we want this to be regular but a quarterly schedule got rejected. Discussion suggested once every six months (at least - more often is possible if necessary) would work but the question was, when exactly.
[20:44:04] No responses to my ML post so it's up to us. My personal preference would be June+December. Any alternative suggestions?
[20:45:11] June is normally the last meeting of the council term
[20:45:43] but yeah, that doesn't block it
[20:46:32] we can possibly decide while assigning chairs
[20:49:29] mgorny: It's an option.
[20:50:52] Anyone else in favour of the "assigning chairs" solution? We'll still vote on the motion but I'd like to know which one to post.
[20:51:50] marecki: just put a three-fold motion, i.e. vote A/B/no/abstain
[20:52:49] I'm still not sure why you insist on formalizing it. "as needed" works for me, but so does jun/dec or the chair approach too.
[20:53:25] jun/dec wfm, chair is more complicated
[20:53:55] gyakovlev: i agree, therefore my vote is "no" ;-)
[20:54:02] gyakovlev: As mentioned on the ML, I would prefer to avoid having the chairs think every month whether this is due or not. Just schedule it in advance and don't bother with it.
[20:55:24] And the status quo is that we talk about this EVERY month, via the open-bugs point of the agenda.
[20:55:47] Ehh, mind too frazzled to formulate a multiple-choice motion. Let's just do it the simple way.
[20:55:53] let's vote on jun/dec then?
[20:56:09] Motion: the Council must review the status of Gentoo arch activity at least during the June and the December meeting of each calendar year.
[20:56:12] * marecki yes
[20:56:27] * mgorny no
[20:56:30] * ionen abstain
[20:56:31] * dilfridge abstain
[20:56:32] * ulm abstain
[20:56:46] * gyakovlev whatever/abstain
[20:57:01] * soap yes
[20:57:10] Motion rejected by meh.
[20:57:26] hm? passes with 2 yes 1 no?
[20:57:30] 2y?
[20:57:30] it obviously passes
[20:57:35] * marecki blinks
[20:58:04] the most important thing is we talk about it every so often
[20:58:09] Never mind, brain forgot how quorum works
[20:58:17] 2 yes, 1 no, 5 abstain. Motion passes.
[20:58:40] I'll write it up on the Council Wiki page some time soon.
[20:59:17] 5. Open bugs with Council participation
[20:59:30] ...of which we've only got one, Bug #823762
[20:59:32] marecki: https://bugs.gentoo.org/823762 "[TRACKER] ~ only candidate arches"; Gentoo Council, unspecified; CONF; gyakovlev:council
[20:59:46] Is it just me or is x86 slipping again?
[21:00:19] I thought Whissi was taking care of x86 again?
[21:00:57] soap: If my own keyword requests are anything to go by, alpha and x86 are the last two arches to go.
[21:01:26] And in alpha there are at least some legitimate excuses.
[21:03:23] I've got half a mind to stop adding x86 to re-keywording requests.
[21:03:45] jsmolic is doing x86, and he's doing a good job
[21:03:48] dilfridge: Are your plots up to date?
[21:03:51] reporting bugs upstream, etc.
[21:03:55] yes
[21:04:08] https://www.akhuettel.de/gentoo-bugs/arches.php
[21:04:15] There is a move to give me some kind of dev box (or shell box to work on), and in that case I can help with the x86 stabilizing speed. amd64 & x86 are the only stable arches I can't help in normal times, and it might be solved in the near future.
[21:04:40] arthurzam: jsmolic: is there a possibility you can look at setting up your tooling for keywording too?
[21:04:43] and from the plots, the arches are all good imho
[21:05:06] sam_: My tool, tattoo, already works for keywording
[21:06:11] excellent
[21:06:27] Great.
[21:06:29] my tools are also running keywording, fwiw. all bugs are processed automatically, if there is a delay, it's likely there was a failure, and for now we have to manually report all bugs which can sometimes create delays
[21:07:44] Still, does anyone object to giving x86 the ppc treatment, i.e. defaulting to not re-adding dropped x86 keywords unless there are good reasons for it?
[21:07:56] marecki: so just to reassure you, x86 keywording is also being handled, but since sometimes it's hard to juggle everything just ping me directly if you think there is any delay (I may forget to report failures sometimes etc.) :)
[21:08:27] I mean honestly, who on earth needs something like media-sound/easyeffects on x86 these days?!?
[21:09:50] Aaaanyway. Now that we've got arch review as a permanent periodic agenda point, shall we close this bug?
[21:11:30] marecki: re-assign to me, I'd like to keep it as a tracker
[21:11:31] +1
[21:11:43] just remove council and it should be good to go.
[21:12:13] (i'm not on a dev machine, not logged in to bugzie)
[21:12:18] but can do myself later.
[21:13:16] Done.
[21:13:33] 6. Open floor
[21:13:47] just a quick thought from me on EAPI 9
[21:13:49] * marecki dons the flame-proof vest and hides behind mgrony
[21:13:53] *mgorny
[21:14:01] heh
[21:14:11] * dilfridge prepares for the floor to open and swallow mgorny
[21:14:15] back in the day we changed PMS to defer to metamanifest GLEP wrt manifests
[21:14:29] however, portage/pkgcore still use the oldschool Manifest implementation
[21:14:35] and e.g. can't handle DIST entries outside the current package
[21:14:59] maybe we could use EAPI 9 as a switching point for Manifest support
[21:15:43] IIRC PMS never described the Manifest format
[21:15:57] i'm not talking of PMS, i'm talking of the practical aspect
[21:16:00] we only updated the reference from GLEP 44 to GLEP 74
[21:16:12] sure
[21:16:23] i.e. say that for EAPI 9, PM needs to implement a subset of GLEP 74, and since EAPI 9, ebuilds can rely on that
[21:16:27] I was commenting on this: <@mgorny> back in the day we changed PMS to defer to metamanifest GLEP wrt manifests
[21:16:30] this could help distfile entry reuse
[21:17:06] though not sure if the impl in GLEP 74 is really the best one for that
[21:17:15] * ulm checks the current wording
[21:18:30] mgorny: remind me, what part of GLEP 74 would we have to refer to?
[21:19:35] https://www.gentoo.org/glep/glep-0074.html#modern-manifest-tags for a start
[21:19:42] > DIST entries apply to all packages below the Manifest file specifying them.
[21:19:59] so technically if we had e.g. a dev-go/ category, we could put a shared Manifest there
[21:20:11] but i think i didn't think that one through very much
[21:20:12] so they would be moved to the category, or to top-level?
[21:20:25] it basically assumes that the PM always has to go from the root down
[21:20:29] top-level seems scary :)
[21:20:46] alternatively, we could put a dedicated Manifest for them and use a MANIFEST entry to include it
[21:21:06] https://www.gentoo.org/glep/glep-0074.html#manifest-file-locations-and-nesting
[21:21:18] not sure if we can refer to other manifests via ../
[21:21:25] or use the same Manifest in multiple locations
[21:21:46] basically, i didn't really anticipate this use case back when the GLEP was written
[21:22:24] if it's not implemented anyway, a GLEP update shouldn't be a problem :/
[21:23:15] sub-manifest sounds better to me than distfiles in category manifests
[21:24:08] well, let's hope we won't have to resort to that
[21:24:21] but still, it's probably worthwhile to stop forcing old EBUILD/MISC/AUX entries at some point
[21:24:38] ok, that's all i wanted to say
[21:24:46] does anyone else want to take the floor?
[21:25:02] but AUX saves some valuable bytes :p
[21:25:41] but we can then move all of them a level up and compress!
[21:26:10] i mean, at this point we don't really need non-DIST entries in ebuild dirs
[21:26:45] * gyakovlev learns how to decode gzip with bare eyes
[21:27:08] two days ago we added a new field to all developer wiki pages
[21:27:30] that shows all the packages a developer (with commit access) maintains
[21:27:53] https://wiki.gentoo.org/index.php?title=User%3ASoap&type=revision&diff=1047737&oldid=418940
[21:28:10] I'd like to have this field be mandatory, just like on the "Gentoo developers" page
[21:28:23] i.e., if a dev has commit access, it links to p.g.o
[21:28:39] sounds reasonable.
[21:29:01] especially if someone already fixed my page for me ;-)
[21:29:06] I agree, not really seeing a reason for it to be optional (the information is out there either way).
[21:29:08] sure, but if you want to push it via council.. next month
[21:29:15] or via bug
[21:29:46] can I get an informal take?
[21:30:00] * mgorny informally yes
[21:30:15] * ionen informally proxiedly yes
[21:30:23] * dilfridge yes
[21:30:24] * gyakovlev yes
[21:30:36] * soap yes
[21:30:39] * ulm yes (informally)
[21:30:49] ok, thanks, will file the bug then
[21:31:12] soap: how helpful is the link to p.g.o actually? it doesn't show packages maintained via projects
[21:31:31] ulm: i think there's an option to enable that
[21:31:33] it still shows packages maintained by individuals?
[21:31:36] the page may even be empty for some devs
[21:31:50] it shows project-maintained stuff for me though
[21:31:56] https://packages.gentoo.org/maintainer/ppc@gentoo.org
[21:31:56] https://wiki.gentoo.org/wiki/Project:PowerPC
[21:32:03] ulm: Can be set at https://packages.gentoo.org/user/preferences/maintainers
[21:32:19] oh nvm, that's a different field
[21:32:20] |Packages=Yes
[21:32:20] arthurzam: thanks
[21:38:36] anything else? sounds like we're done
[21:38:53] Going once...
[21:39:22] ...eh, it's been long enough this time. Thank you everyone, the meeting is now closed.
[21:39:33] \o/
[21:39:38] thank you
[21:39:40] thanks
[21:39:42] thanks all
[21:40:02] yay
[21:40:37] * ulm changed the topic to "223rd meeting: 2022-03-13 19:00 UTC | https://www.timeanddate.com/worldclock/fixedtime.html?iso=20220313T19 | https://wiki.gentoo.org/wiki/Project:Council | https://dev.gentoo.org/~dilfridge/decisions.html".
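To illustrate the GLEP 74 point from the open floor above -- DIST entries apply to all packages below the Manifest that declares them, which implies the package manager resolves them by walking from the repository root down -- here is a toy Python resolver. The layout, helper names and the simplified entry parsing (ignoring sizes, hashes and MANIFEST sub-manifest entries) are assumptions for illustration, not pkgcore or Portage behaviour.

```python
from pathlib import Path

def manifest_dirs_root_down(repo_root: Path, package_dir: Path):
    """Yield repo_root, then each directory down to the package directory."""
    current = repo_root
    yield current
    for part in package_dir.relative_to(repo_root).parts:
        current = current / part
        yield current

def find_dist_entry(repo_root: Path, package_dir: Path, distfile: str):
    """Return (manifest_path, entry_line) for the first DIST entry covering
    `distfile`, or None.

    A DIST entry in e.g. dev-go/Manifest would be seen here by every
    dev-go/* package, enabling shared distfile entries.  Toy parser: real
    GLEP 74 entries also carry the file size and one or more hashes, and
    MANIFEST entries pulling in sub-Manifests would need to be followed too.
    """
    for directory in manifest_dirs_root_down(repo_root, package_dir):
        manifest = directory / "Manifest"
        if not manifest.is_file():
            continue
        for line in manifest.read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "DIST" and fields[1] == distfile:
                return manifest, line
    return None
```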
[21:41:05] next meeting collides with Chemnitzer Linux-Tage which is the major linux event in germany
[21:41:20] so I may need a proxy