From 52827613f4fd277959719a1ca59db78b5d4e0d82 Mon Sep 17 00:00:00 2001
From: Aki
Date: Thu, 23 May 2024 03:53:21 +0100
Subject: [PATCH] Add April (101st) technical notes (#320)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Add April (101st) notes

* 🚨 fix linting issues

* Apply suggestions from code review

Co-authored-by: Chris de Almeida
Co-authored-by: jmdyck

* Apply suggestions from code review

Co-authored-by: jmdyck

---------

Co-authored-by: ctcpip
Co-authored-by: jmdyck
---
 meetings/2024-04/README.md   |   26 +
 meetings/2024-04/april-08.md | 1049 +++++++++++++++++++++++++++++++
 meetings/2024-04/april-09.md | 1139 ++++++++++++++++++++++++++++++++++
 meetings/2024-04/april-10.md | 1117 +++++++++++++++++++++++++++++++++
 meetings/2024-04/april-11.md | 1072 ++++++++++++++++++++++++++++++++
 5 files changed, 4403 insertions(+)
 create mode 100644 meetings/2024-04/README.md
 create mode 100644 meetings/2024-04/april-08.md
 create mode 100644 meetings/2024-04/april-09.md
 create mode 100644 meetings/2024-04/april-10.md
 create mode 100644 meetings/2024-04/april-11.md

diff --git a/meetings/2024-04/README.md b/meetings/2024-04/README.md
new file mode 100644
index 00000000..e60a491a
--- /dev/null
+++ b/meetings/2024-04/README.md
@@ -0,0 +1,26 @@
+# 101st meeting of Ecma TC39 Summary
+
+## Proposals
+
+| Advanced to | Proposal |
+|-----------------|-----------------------------------------------|
+| Stage 4 | Set methods |
+| Stage 4 | duplicate named capture groups |
+| Stage 3 | new Function |
+| Stage 3 | Make eval-introduced global vars redeclarable |
+| Stage 2.7 | Promise.try |
+| Stage 2.7 | Math.sumPrecise |
+| Stage 1 | Error.isError |
+| Stage 1 | Strict Enforcement of 'using' |
+| Stage 1 | Signals |
+| Withdrawn | Array.last |
+
+## Task groups
+
+* KKL has joined the conveners group for TG3; meeting cadence doubled to weekly
+* TG4 has consolidated their code into one repo & is seeking input on organization (especially re tests)
+* TG4 will be meeting in person in Munich June 24-25
+
+## Other
+
+* Committee reached consensus on _non-binding_ guidelines regarding coercion (and not doing it)
diff --git a/meetings/2024-04/april-08.md b/meetings/2024-04/april-08.md
new file mode 100644
index 00000000..87a2bfac
--- /dev/null
+++ b/meetings/2024-04/april-08.md
@@ -0,0 +1,1049 @@
+# 8th April 2024 101st TC39 Meeting
+
+-----
+
+Delegates: re-use your existing abbreviations! If you're a new delegate and don't already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream.
+
+You can find abbreviations in delegates.txt
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
|--------------------|--------------|--------------------|
| Waldemar Horwat | WH | Invited Expert |
| Linus Groh | LGH | Bloomberg |
| Duncan MacGregor | DMM | ServiceNow |
| Daniel Minor | DLM | Mozilla |
| Nicolò Ribaudo | NRO | Igalia |
| Chris de Almeida | CDA | IBM |
| Jesse Alama | JMN | Igalia |
| Kevin Gibbons | KG | F5 |
| Michael Ficarra | MF | F5 |
| Jordan Harband | JHD | HeroDevs |
| Ben Allen | BAN | Igalia |
| Jason Williams | JWS | Bloomberg |
| Bradford Smith | BSH | Google |
| Ujjwal Sharma | USA | Igalia |
| Philip Chimento | PFC | Igalia |
| Sergey Rubanov | SRV | Invited Expert |
| Mark Miller | MM | Agoric |
| Daniel Rosenwasser | DRR | Microsoft |
| Jack Works | JWK | Sujitech |
| Istvan Sebestyen | IS | Ecma International |
| Ashley Claymore | ACE | Bloomberg |
| Mathieu Hofman | MAH | Agoric |
| Samina Husain | SHN | Ecma International |
| Mikhail Barash | MBH | Univ. of Bergen |

USA: And, okay, so let's move on with the approval of last meeting's minutes. The minutes from the last meeting have been uploaded on GitHub. Let's give a minute for the approval. As a reminder, if you do not approve of the last meeting's minutes, you should speak up now. Great.
USA: All right, now that the last meeting's minutes have been approved, I would like to ask you all to confirm the adoption of the current agenda for this meeting. You might have checked it out on the agendas repo that we have. That said, if you have any objections regarding the agenda of this meeting, please speak up.

USA: All right. With that, we have adopted the current agenda and we'll move on with the meeting. Samina, are you prepared for the report?

## Secretary's Report

Presenter: Samina Husain (SHN)

- slides (see agenda)

SHN: Thank you. Great, thank you for the great start, Ujjwal, and for the details for the meeting and the overview. Yes, it is a solar eclipse day today, so I think the next one in this hemisphere is not for many, many years. From a timing perspective, I think it's in the latter half of our meeting, depending on where you are. I am in the East Coast timezone, so it will be at the very end of our meeting, and we will have totality for just over two minutes, so it should be an experience. And hopefully we'll all have no clouds.

SHN: All right, so just an update on what we have discussed since our previous meeting, which was the 100th meeting in San Diego. Just an overview of what we will talk about in my short presentation. One observation for everybody: I will not go through the annex slides, and I will leave them all to you to review on an as-needed basis. They highlight the latest documents uploaded on the ECMA server, which you may find interesting. Not every delegate has access to it, but you may get access or information from your chairs. There's always information about the statistics and the participation we've been having in the meetings and when the next dates are. But I will leave that for you to review after.

SHN: All right. As many of you may have seen, I sent an email to all members of ECMA a little while ago.
We have a vacant position in our executive committee, so we are looking for nominations. It is typical that we do this only once a year, but it is also very important to be active and have engagement from members. So we have made the decision through the executive committee that we would do a nomination in a non-year-end term. So if you have somebody from the ordinary members you would like to nominate, or if somebody would like to self-nominate, I would be happy to receive it. Just email it to me. It's a good opportunity to be very active in the governance and the activities of ECMA through the different technical committees that are also involved. That's just a reminder.

SHN: I also have a reminder on the ECMA approval. Thank you very much: the Edition 15 preparation has already begun, and that document was frozen very soon after the San Diego meeting. I'm just giving a reminder with the dates here, and I don't think I missed it, but if I have, you may correct me: for ECMA-402 I have not seen the freeze of that document. If it has been done, thank you. If it has not been done, then please remember that we are very close to the timeline where you want to manage the 60-day opt-out and the 60-day review before the General Assembly, which takes place in June.

SHN: Some new projects and new members I want to highlight. I had already mentioned some of this in our previous meeting, but we have a TC54 that has been active since December of last year. It is moving forward very well. We hold regular meetings; if you are interested, the recordings of the meetings (they've always been recorded) are published on YouTube. I believe it's publicly available. The information can be found through the CycloneDX website, or you may just ask me. The meetings are now held weekly for one hour, and the committee is reviewing the entire documentation and specification before it goes for approval by the ECMA GA.
It's a very extensive project and very well done; if you want to participate as a member, please do so.

SHN: We also have a new proposal that we will be discussing at the April ECMA ExeCom, which is TC55, pending the name, WinterCG. Again, if there's interest from members of this committee on that, it's excellent that we have a new technical topic to be working on. We have already chartered the TC39 TG5 on experiments in programming language standardization. I hope that's moving forward. I believe our colleagues from Bergen are on the call and will give an update, and this is very good, so I look forward to this continuing to move forward.

SHN: I am very happy to welcome some new members. I had noted them as potentials at my last meeting. We have received all the applications, so they are all provisional members today: Replay.io, HeroDevs, and Sentry. There is just one small but important document: I'm waiting for the RF to be signed and provided back to my attention, and then there should of course be complete participation from Sentry as well. We have new members and we have a big committee, so I just wanted to bring to your attention that when an application comes to the ECMA International secretariat and we start the processes, the member company, after a very minor exchange of documents, becomes a provisional member. They may participate with their delegates in any TC they wish. It is important to note that they may not vote until they become an official member, which typically takes place after the General Assembly. I just want to highlight that right now: the members that have newly joined are provisional members, and they do not have voting rights. In addition, I want to mention that for all of these members that have joined, there have been invited experts from those organizations participating in the TC to check it out, whose organizations then eventually become members.
Your invited expert designation will change to a delegate designation once your organization becomes an official member, and that can take place in the June time frame. Just to let you know, going from invited expert to delegate does not in any way change the level of expertise you bring to the team; only the title of the designation is slightly different. So that's just a clarification.

SHN: And I'm just going to pause here. I can't see if there are any raised hands or questions, but please ask in the queue and I will address them when I finish speaking.

SHN: Leading on from that, as we have many new members, and we are a big group, and we also have valuable invited experts, I just wanted to highlight some text here. The first two bullets are text that comes directly from the invited expert application, from the ECMA perspective. We are very happy to have invited experts. You bring great value. Typically, invited experts are not necessarily representing a member or any organization. But in the event that they do represent a member, or we encourage them to become a member, then of course the designation changes, as I've mentioned. I also want to highlight again that invited experts do not have voting privileges. You have the privilege to be part of the committee, bringing your valuable input, and to have lots of discussion. When it comes time to do any kind of temperature check, as we've done in the past, typically we don't vote. But if such an extreme case were to happen, invited experts, like provisional members, are not permitted to vote. I also want to remind you, from a voting perspective, or even a temperature-check perspective, as we had to do in the November meeting: it is very important to remember that member companies (there are over 20 member companies on TC39, I think exactly 27) each have one vote and a voice in the discussion and the temperature check.
It is important that the invited experts and the observers do not necessarily participate in that. So it's a difficult one to manage, but I look to the committee to take care of that appropriately.

SHN: Again, as we have new members and we always have new invited experts coming on, and thank you, USA, you already very clearly mentioned that the chairs always note at the beginning of every meeting that there is a Code of Conduct committee. I just wanted to take the time: I've been through your website, your GitHub page, and I've extracted some key words. I want to remind the committee that as we work together, it's very important that we do so in a very open and constructive manner on any of the channels that we are engaging for this meeting. Just as a reminder, I've put up the keywords; we do tend to use these. I know we have lots of hot conversations and deep discussions. Please always remember this, and continue to be as productive as we are.

SHN: My last point on my main slides is some feedback on the review of the solution for a PDF version. I do see that there is a slide set that has been prepared, and a lot of effort has been made by Kevin and Michael. Thank you very much for that. I will wait for your discussions; I think your agenda item is coming up very soon. It was just a placeholder on my slides to remember that we do discuss that. I look forward to that feedback, and of course to finding the next steps. And that's the end of my slides. As I mentioned, I will leave you to review everything in the annex: the documents that are uploaded, the statistics that are there, and of course the next meeting information for the schedules that we have, not only for TC39, but for the GA and ExeCom. And with that, I will stop sharing and be happy to address any questions. As far as I can see, there is nothing on the queue. Okay, there is now a question. Nicolò?

NRO: Yeah, you said that temperature checks have basically the same rules as votes.
Unless I misunderstood. I find this weird, because except for a couple of times, we have used temperature checks as a non-binding way for champions to see what the general opinion is. So I think it's good to let invited experts participate in that.

SHN: Thank you. Typically, I've only been involved in one temperature check in the many meetings I've been involved in, and I can only use that as a reference. I do remember it being quite extensive. So thank you for that. And, yes, I understand that it's very important for the committee because you have different contributors. So I think it would make sense to enable you to have that conversation in that sense. I don't think you've ever had to vote, because we don't vote, even in an extreme sense. Could somebody just confirm that for me?

DE: I agree with NRO that temperature checks do not constitute votes. They were very, very explicitly formed for this lightweight purpose. However, even though we almost never (possibly only when referring a specification to the ECMA GA) have an official ECMA-style vote with one member having one vote, we do frequently request consensus on stage advancement in TC39. It's long been kind of ambiguous, with different delegates having different opinions on what our process currently is, such as whether invited experts can veto. And in my opinion, although temperature checks are not a vote, asking for consensus is sort of a kind of vote. We do definitely operate in a way that different delegates from the same member organization can individually express their opinions, and I think that's important so that we can maintain the intellectual integrity of how the committee works. But at the same time, blocking a proposal is a serious action, and we could legitimately decide whether it's restricted to ECMA members or includes invited experts.

SHN: Yes, may I just make a comment. So thank you both for that clarification.
If you would allow me to think through this: I will not change anything about how you are doing temperature checks now. And I appreciate this feedback, so I can make a better statement once I have learned all of this. Go ahead, next.

JHD: Yeah, just echoing all that. Temperature checks are not votes, and we're sort of allergic to votes, and we only agreed to do temperature checks in the first place when we all agreed they weren't votes. So I think that's just a miscommunication. As far as invited experts, I mean, the point of consensus is getting everyone's opinion taken into account, and member status has always been irrelevant for that. If you're participating in the meeting, you participate in consensus, empirically. If we need to change that, I think we should have a separate agenda item about it and not derail this topic now (and that would probably need consensus to change).

USA: Let's not go too deep into this discussion, given that this is purely a process-related thing on a secretary's report. But to move on with the queue, SYG, you're up next.

SYG: If invited experts can participate in the consensus process without restriction, I'm confused why anyone would ever become a member. Like, you would need one member to invite everybody, and then we would all just participate.

DE: So I think there's some confusion about how policy is applied and adopted here. The Ecma-level policies are set based on Ecma rules and bylaws, which are adopted at the Ecma General Assembly, and all member organizations have the right to participate in the Ecma GA. We also have these Ecma Executive Committee meetings, which welcome many people from around Ecma. Although Ecma official votes are by ordinary members, in practice these things work by something similar to TC39 consensus building, among all attendees, not only the ordinary members.
If you're interested in Ecma policy, I think there's a lot of work to do, and it would be great to collaborate here. Ultimately, we selected Samina as the Secretary-General, or sort of executive director, of Ecma to be our trusted administrator in applying these policies. So I think that's the appropriate level at which to work through some of these things. I'll be really interested in getting more involvement from the committee in all aspects of this. Thanks.

USA: Thank you, DE.

SHN: I really appreciate the feedback and inputs that you're providing here. It gives more clarity, and Dan's absolutely right: we will be having these discussions at the ExeCom and the GA. So no changes are being made, and it's very important that we have a common understanding. So all the input is very valid and we will be very pragmatic. I'm not sure if I can see the queue right now, so, Ujjwal, let me know if there are any questions.

USA: Thanks a lot, Samina. Also, thank you for supporting us here. Next we have a reminder from JHD.

JHD: Yeah. So the reminder is just that in TC39 we use GitHub teams to keep track of, you know, permissions and stuff, but also who is a delegate and for which member, who is an invited expert, and so on. So if you are the point of contact for your member company, please review the GitHub delegates team for your member company, and if there is anyone who is missing or who should no longer be on the list, please file the appropriate admin-and-business issue for each of those changes so that we can keep things up to date. Thanks.

USA: Thank you, JHD, also for being our super proactive administrator. Moving on.

### Speaker's Summary of Key Points

The report covered several topics, including updates on previous meetings, new projects and members, and reminders about upcoming deadlines and procedures.
An overview of the slides was provided, noting that the documents in the annex slides would be available for review but would not be discussed during the meeting.

The vacant position on the Executive Committee was noted, and nominations from members were encouraged. The importance of active engagement and participation in ECMA governance was emphasized.

Attendees were reminded about ECMA approval processes and upcoming deadlines for document freezes. The importance of timely action to ensure smooth progress was highlighted.

Updates on new projects and members were provided, including information about TC54 and a proposal for TC55. Participation from committee members in these initiatives was encouraged.

Additionally, new members were welcomed, and the process for becoming provisional members was explained. The distinction between invited experts and delegates was clarified, noting that only delegates have voting rights.

During the discussion, questions were raised about temperature checks and invited expert participation in consensus-building. The feedback was acknowledged, and further clarification on these topics will be provided.

## Ecma recognition awards reminder

Presenter: Chris de Almeida (CDA)

- (no slides)

USA: Next up, we have a reminder about the ECMA recognition awards. As you might know, ECMA is in the business of giving out recognition awards to colleagues who have put in a lot of their time and work in making JavaScript better for everyone, and the chairs are requested every couple of months to submit anybody from the committee they think should be given a recognition award. There are a few positions within the committee, which you must be familiar with, where you'd automatically be considered for a recognition award. But apart from that, we should also be mindful of the amazing work that our colleagues have been doing.
So I would like to request you all -- oh, CDA, you have slides for this. Would you like to take this, then, or should I just…?

CDA: I think you did a good job. I just had these slides from a previous meeting. Yes, I probably should update this data. They're reviewed and potentially approved at the GA meetings. So let us know if you have anyone good in mind. And it's helpful to have the nomination text itself, but don't let that stop you from providing your idea; we can help you with that as well. Thank you.

### Speaker's Summary of Key Points

- Reminder to take action on Ecma Recognition Awards

## ECMA262 Status Updates

Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1mXvZ-hgyjFca8BzqZ5jzFSa3AdCcr3rFkINaeZ857nA/edit)

KG: Okay. Good morning, everyone. Or good whatever it is. This will be your typical brief update from the 262 editors. We've landed a handful of normative changes. I believe the first three of these were decided at the previous meeting. The last item: in the process of tweaking the semantics for CSP for eval (I believe this was related to the first item), we accidentally made a normative change that was not intended and did not have consensus, and that no one caught during the review process, so we made another normative change to put it back. We don't ask for consensus for bug fixes like that, because they're just restoring what already had consensus, but we'd like to call them out anyway in case anyone was confused by the previous state or by seeing that change go through.

KG: There's not much in the way of editorial changes to call out. The first one here we're calling out really only as an example of a design principle, which is that there is an abstract operation for finding the index of one string within another. Basically String.prototype.indexOf.
And we tweaked it so that instead of returning negative one when the value isn't found, it returns a special value, and this is the approach that the editors intend to take for operations like this going forward. If you're writing a proposal and designing such an abstract operation yourself, we recommend this approach to you as well. It just generally makes things easier for readers, and since the spec is not code, there's no overhead associated with returning a special value instead of returning negative 1. Not returning a number doesn't actually affect anything. This isn't C. So that's just something to keep in mind when designing things like this in the future.

KG: There are also quite a few miscellaneous cleanups and consistency fixes that we don't feel are worth calling to the committee's attention, but if you're curious, you're welcome to view the commit history on GitHub. I'm not going to go through the list of upcoming work, because it hasn't changed. We are still plugging away at a bunch of stuff. Most of these things we're not actively working on; some of them we are.

KG: And then the last thing that I wanted to call out before we give it over to Michael to talk about the PDF is just a reminder, as Samina mentioned, that ES2024 is frozen. We are currently in the opt-out period. We do not intend to land any further changes to the specification, unless there are editorial changes or perhaps bug fixes discovered in this brief period that we think are actually worth backporting to the present specification, but we don't anticipate that happening. And that's all I've got. Thanks for your time. Any questions before I move on?

USA: Thank you, Kevin. There's nothing on the queue.

## Update on automatically producing print-quality PDFs

Presenter: Michael Ficarra (MF)

- [slides](https://docs.google.com/presentation/d/1kCDZewoWZtL26FnFSiK4tr6ZotGo7TR0_AZGlCwHiEQ/edit)

MF: So this is about the print PDF generation.
As some context: for a little while now, we've been going back and forth with ECMA trying to find a solution for creating good print-quality PDFs that are up to ECMA's standards. Most recently, I had agreed to try the approach that AWB had used, or to try to improve that approach, to see if we can get fully automated PDF generation that meets our standards with little or no effort. Some more background, if you weren't aware: AWB has been contracting with ECMA to do PDF layout on a yearly basis for the last few years, for 262 and 402, so two or three years. AWB was using a tool called Paged.js. It uses the CSS Paged Media standard; it is essentially a polyfill that allows browsers to support that standard, which no browser supports today. That was together with a lot of manual tweaking, and we'll see some examples of what is needed there. So this is a lot of work: we would go back and forth with AWB during review, and every time we did a review, he would basically have to start at the top of the document and work his way down to redo all the manual page breaking. A very, very manual process. AWB is not going to be doing this anymore, and, as I said, ECMA wants us to see if we can do this ourselves. Now, the editor group is not really interested in taking on multiple weeks' worth of work every year to do it manually, so we're trying to use the tools as best as we can to get automatic generation. If you're not aware, ECMA has this standard for standards called TOOLS-011, which explains in excruciating detail how the specs should be presented, along with a lot of details about the contents as well. So I tried to follow everything in there as best as possible. And we'll get to what I could and couldn't do.

MF: So there was definitely a lot of success. I was able to implement all the page header and footer stuff, and change all the fonts and dimensions to match what was in TOOLS-011.
I got all the page numbering restarting and everything, which was very hard but which Paged.js was surprisingly able to handle. I got an automatically generated table of contents, which was great. And more. So I was pretty happy with what we were able to achieve. But there were a bunch of rules that I have written that unfortunately Paged.js does not support at the moment, and these are pretty important rules to follow; this is what most of the manual effort would have to cover. There are certain places where page breaking should not occur, and I've written rules for all of those things using CSS, and unfortunately Paged.js just doesn't respect them. Each of these rules was written because there is currently a violation, and each of these violations would need to be discovered and manually addressed, via manual page breaks or via splitting tables or splitting lists, which is not an easy process. It's a lot of careful work. So if Paged.js were improved, we could theoretically have all of this for free, automatically.

MF: And then there are some things that I did not implement. I'm not going to go into details on each one, but some of them were because the editor group just doesn't feel that they would improve the quality of the document; we would rather not follow TOOLS-011 in some spots. Some of it was just because I didn't want to do the work if we don't know whether we're going to go this fully automatic route, because of all the other insufficiencies. It's just a mix there. All of this we could do if we wanted to. Some of this we could do if we want to go down the route of fully automatic generation.

MF: The biggest warning I have here is that it's not just that Paged.js creates layout issues; it's that it may accidentally introduce errors into the document. So the PDF cannot be a canonical resource.
It's very difficult to notice when table rows or even just individual table cells are lost, or an algorithm step is missing between pages, or it's just pushed off to the side sometimes. There are some weird things it does. Sometimes the page number references get off, and I'm not sure why. There are just subtle bugs, really, really hard-to-detect bugs. You have to go through the spec line by line, carefully reading it, to try to find them. So even if we had all of the features, I still don't 100% trust the tool, because it's pretty buggy. Even though it is very impressive. It's a very impressive tool.

MF: I have some samples. Here is a section of the automatically generated table of contents. Actually, it's a little bit outdated; it looks a little bit different than this now. It's just an example. You can see it's using Roman numerals before the first clauses start.

MF: Here are some examples of some failures. If you look at the bottom of 230, it splits very weirdly, where the first column and then half of the second column are on 230, and then half of the second column and the third column are on 231. So it makes it look like it's supposed to be another row, but it's not. It's a row split in half.

MF: Here you can see the table header at the bottom of 223 is split from the rest of the table, so you won't have that on the next page. So that's the kind of thing that needs to be not split, but pushed to the next page.

MF: Here you can see that when a note is split across pages, it has weird layout, because the actual note part isn't pushing it; Paged.js isn't supporting flexbox there.

MF: This is actually a good example where you can see figures are automatically fit nicely onto pages, in the bottom right.
And here is another example: annexes are supposed to follow strict formatting, where they're supposed to say "Annex" whatever, centered at the top, and then whether it's normative or not, which I think all of our annexes are, maybe. That kind of stuff is a thing that we can change.

MF: My conclusions here are that, at the moment, it is definitely the case that we cannot automatically generate PDFs using Paged.js that are up to the quality standards of ECMA, especially given that TOOLS-011 has a lot of requirements. If we were to do the process that AWB has done in the past, where we manually address all of the things that Paged.js cannot do automatically today, I would estimate it would take between 50 and 100 hours. A lot of this is that it takes about five minutes for Paged.js just to do the render so that we can see whether the change we made had the correct effect. And you're going to repeat that, you know, 1,100 times to do every page of the document. So that's a lot of work. And I would do that work once, maybe. But it's work that would have to be redone every year, and I'm really not interested in that. I don't think anyone on the editor group is.

MF: So I give a few options that we have here. There may be others that I've not thought of, but these are the things I can think of that we can move forward with. The first one is what we recommended -- no, the second one is what we recommended in the past to ECMA: there are layout services that do this exact thing. It's called layout. People usually hand something like a manuscript for a book to these services and they lay it out as a book. It's not a design service or anything; they just make it break nicely. So, yeah, the first option is that we could ask somebody else to do this manual breaking using the HTML document and have Paged.js just render the thing as we want it. The third option is that somebody could work on improving Paged.js.
I don’t know how long it would take to fix those bugs and add this additional support that we need for the couple of manual breaking features. But it’s possible that that could be done in less time than it would take to do the manual splitting even once. But I’m not sure, because that code base is -- it’s, like, maintained by a single owner. It’s one of those. So we could also hope that the browsers implement CSS Paged Media. If they implement the standard as-is, it would support all of these things. Also, CSS Paged Media, I learned through this, is very great and I’m very appreciative of it. I don’t see that happening anytime soon. And the last option is we just accept it the way it is. Without Paged.js. As-is, print to PDF doesn’t introduce errors, but it doesn’t have any of the features that we had on the first slide, like the page header and footer and numbering and table of contents and everything. That would all be lost, but at least it wouldn’t have errors in it. I don’t really have a preference between these. I don’t particularly care whether the PDF meets the standards, but our ECMA representatives here really do, and I respect that. So we’re doing what we can to solve it. But this is what I see our options as, moving forward. So I’d love to hear if there’s any feedback.
+
+SHN: Thank you. Thank you, MF, for taking the time to go through this, and also, KG, I know that we’ve had a number of back and forth, and, yes, I understand that the relevance of this may have different -- relevance of having a PDF document may have different weights between ECMA and the -- and the committee. But nevertheless, it’s still important. The options that you mentioned in your conclusion, MF, are they in order of some priority, or is there an order of which is the most recommended conclusion?
+
+MF: No, they’re just the order that I happened to write them down in.
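For readers of the notes: the CSS Paged Media features discussed above (running headers and footers, page-number counters, and control over page breaks) are expressed with rules roughly like the following sketch. The selectors are illustrative and do not come from the actual ecmarkup print stylesheet:

```css
/* Illustrative CSS Paged Media sketch; not the actual ecmarkup stylesheet. */
@page {
  size: A4;
  margin: 25mm 20mm;
  @top-center {
    content: "ECMAScript Language Specification"; /* running header */
  }
  @bottom-center {
    content: counter(page); /* page number in the footer */
  }
}

/* Break control of the kind discussed above */
table {
  break-inside: avoid; /* prefer not to split a table across pages */
}
thead {
  break-after: avoid; /* never leave a table header orphaned at the bottom of a page */
}
emu-annex {
  break-before: page; /* start each annex on a fresh page */
}
```

Paged.js implements a subset of these rules as a polyfill; the bugs described above are cases where that subset falls short of the specification.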
+
+SHN: What would you say is -- could be the path of least resistance in finding a solution, in your order of conclusions?
+
+MF: So I -- I still would probably recommend going with a professional layout service, number 2. In the past, we had done some research for ECMA when requested to find the layout services. We found, I think, four of them, and the price varied quite a bit, but it ranged from like $1,000 to $5,000, and this would be a per-year cost. And I think that they would do a better job than even the best work that we do with Paged.js. We would probably have to give them -- we would just give them TOOLS-011 and a couple of exceptions to TOOLS-011 that, as I said, the editor group would prefer not to make. And then they would produce something better than what we could.
+
+SHN: So if I understood correctly from the work, and it also validates -- some of the comments already that Allen made, that much can be done, but there is a manual process, and I know that that manual process obviously can be quite tedious. You’ve also noted that. In using option 2, to pay a layout service to do this, and giving them the TOOLS-011 as you mentioned, would that somehow alleviate some of those manual things that still need to be done?
+
+MF: So they would work with the original source document and we would not try to do the print to PDF using Paged.js. There would be no manual process for us. Their process is manual. They hand lay out the document.
+
+SHN: Okay. I mean, I would like to -- I’ve looked at your slides already once and I appreciate the feedback you gave in the meeting. I’d like to review some of it. I may come back to you with some specific questions. My last question, if I may, and I’m sorry if it’s before my time, but the recommendations that were made by the editors on where we could find some of the solution, would you forward that to me again?
+
+MF: Yeah, I can dig that up.
+
+SHN: That would be appreciated.
Because we have such a short time and the timing is very critical until June, we have requested Allen to do it one last time, to enable me a bit more time to find a solution. He is mulling it over, so in the meantime, if you could send me that, we will find a solution. I appreciate this feedback, and I will come back if I have any deeper questions. Thank you.
+
+MF: And the amount of manual work AWB would have to do this time should be significantly reduced. It’s just adding the manual breaks and doing some table or list splitting where appropriate. He wouldn’t have to redo any of the numbering stuff. That should be mostly handled. I’m still not 100% confident that Paged.js would not introduce a bug, but if it’s reviewed properly, it should be less work than previous years.
+
+SHN: Yes, you had mentioned that there may be some errors, the warning you mentioned. So that’s important to take care of. Okay -- and you did this just for 262, because I think for 402, we didn’t have any issues?
+
+MF: All the work I’ve done should apply to both, because they’re both using ecmarkup, and the vast majority of the changes I made were to ecmarkup, to improve the print-specific CSS. Breaking rules are in there as well. If Paged.js is improved -- if we take number 3, like asking somebody to implement those last couple of manual breaking overrides -- both of the documents should lay out entirely correctly.
+
+SHN: Okay. You mentioned the 50 to 100 hours. That’s for that manual layout?
+
+MF: That is based on how long it takes to do each render, versus how many manual changes there are per page, times how many pages there are.
+
+SHN: My last comment, thank you, and then I will stop: is there an opportunity for the editors group -- for the work that we give to a third party to do this, can some of the work be shared by the editors group, or is your conclusion that all the work should go to a third party?
+
+MF: If I understand the question correctly, if we, like, went with a layout service, number 2, then they would start with the HTML document, and a lot of the – I guess all of the – changes that I have done wouldn’t be relevant there, because they wouldn’t be doing print to PDF.
+
+SHN: Mm-hmm. Okay. Just trying to estimate the efforts. Okay. I think I do need to maybe ask you a couple of questions offline just to clarify some things. And we can move forward. Thank you, I appreciate the feedback.
+
+USA: Next on the queue we have KG.
+
+KG: I want to emphasize that Paged.js is open source software and it is actively being worked on. They have a beta right now which I have been filing bugs against. It’s possible that they will improve in the future to the point that it’s accurate for our needs. In fact, it’s decently likely, I think. And it certainly becomes more likely if they are sponsored. So I do think that it’s worth considering the possibility of trying to have ECMA sponsor Paged.js to improve their software. And/or to have people at Paged.js specifically take the document that we are working on and ask them to do the layout – like, sponsoring them to do the layout on that – which they are likely to do by improving the software in a way that would be usable for us in the future. That’s all.
+
+USA: Next on the queue we have CDA.
+
+CDA: Yeah. Some of the contents seem to imply that the PDF versions generated using this may have some mistakes. Do we have a sense of the extent that that might be?
+
+MF: So the mistakes are mostly when it’s doing, like, this weird breaking, which we would avoid through the manual break overrides. So I would say that it’s greatly reduced, because AWB would have gone through and broken it in reasonable spots. But there’s still a chance, and AWB is very careful.
It’s possible that he also carefully reviewed every single line and every single page number reference and that kind of stuff, and made sure that they are correct. But I don’t know beyond that.
+
+USA: I am next on the queue because I need to make a clarification. We also previously discussed briefly the 402 spec. And on the question that CDA just asked, I wanted to clarify. My understanding is that it is, you know, ultimately a best-effort kind of a task. At least for the editors. So out of all of the options that MF listed out, what we use is basically the fifth one. We would inject some CSS into the built spec and then print to PDF. It might have to be done in a certain browser that would produce the best results, and then we sometimes go through the parts of the spec, through the tables and stuff, and make sure nothing is clipped. But ultimately, there have been some mistakes in the past PDF versions for 402, and that’s not entirely avoidable. But the discussion today is more about how to do a better job than we have in the past.
+
+DE: So thank you so much, MF, for preparing this presentation and for going through this exercise. This is really helpful. Now that we are probably going towards making a request for, you know, budget allocation for ECMA to solve this problem, given the lack of volunteers in committee to do this manual work, I want to ask SHN: what is the timeline we have to make this budget request? In the past, when we made budget requests we have had some problems getting them in time for the ECMA process.
+
+SHN: For the budget, the budget for 2024 is already built. For this year, I would hopefully find a solution again which is effective and works for everything, through Allen. I have some budget for that, depending on what we would like to do. To make sure this is ongoing in the future, I need budget information at least by the third quarter, when I start doing my budget planning.
So we have a little bit of time for 2025.
+
+DE: If Allen is open to it – which he said he wasn’t, but maybe that changed – that sounds like a good plan to me. If Allen weren’t available, I don’t understand why we couldn’t redirect our budget to another solution.
+
+SHN: Correct. So that can be – I just assumed it would be difficult to find a solution immediately for June without knowing more, until I listened to this presentation. I also asked Allen to consider it. He is mulling it over. I don’t have a firm yes or no from him at this point in time. But that funding would be used; if he chose no, we still have to find something.
+
+DE: Great. That sounds like the budget has been allocated; if he’s unavailable, we will find other solutions, and we will work to have something by June. Is that correct?
+
+SHN: That’s correct.
+
+DE: So I guess Michael and Samina will be in touch about this so the details can be understood?
+
+MF: I would like to add that in the previous discussions with the layout services, the lead times were not terribly long. If we have a month, that’s probably fine for all of the layout services.
+
+DE: Okay. Thank you.
+
+USA: Thank you, MF. Would you like to make any concluding remarks?
+
+MF: I don’t think so.
+
+### Conclusion
+
+- MF has incorporated AWB’s PDF generation advice, and found that it will still take a week or two of manual work to produce a high-quality PDF. There are no volunteers among the committee or editors to do this work.
+- For 2024, AWB will (somehow) do this work again. TC39 requests that Ecma include this work in future budgets, as it has done for 2024.
+
+## ECMA402 Status Updates
+
+Presenter: Ben Allen (BAN)
+
+- [slides](https://notes.igalia.com/p/fxx00_k5K)
+
+BAN: All right. And hopefully that is visible. We are currently in freeze, and so there are no normative changes. We have a number of relatively small editorial changes. Many of them are largely meta-changes – README updates, stuff like that.
But we do have several editorial changes related to better adhering to BCP47. All of these are changes to the algorithms that are editorial because they involve language tags we don’t use.
+
+BAN: So the first of them is that we previously mishandled single-letter BCP47 tags. It wasn’t actually a problem, because we don’t use those; but in order to clarify the algorithm, we have made it actually correct. We have also done several refactors to simply make our locale resolution algorithms look more like BCP47 algorithms. And likewise, our default locale doesn’t generate any tags with `-u-` extensions, and we have better documented that.
+
+BAN: I would say the most meaningful editorial change is that we had previously had an alias name that was confusing: dataLocaleData. We had localeData and dataLocaleData. We have changed the alias to resolvedLocaleData, and made several changes to the names of associated things. We have capitalized some slot names in 402. There are some slot names that have to be lowercase because they are in the same namespace as certain pattern strings that must be lowercase. However, there are some that didn’t need to be lowercase. We have camelCased these to adhere to the standard used in 262.
+
+BAN: This next one is unrelated to the BCP47 changes. Previously the DateTimeFormat spec used ambiguous and non-standard language for table iteration. That is fixed. And also there were some steps to lowercase strings that were already guaranteed to be lowercase. And then there’s a few meta changes. Most notably, this one I like: in README and notes we have updated old references from the master branch to main, since we are now using main. Finally, we were missing a LICENSE.md file. So we have added that. And that is it. Thank you.
+
+## ECMA404 Status Updates
+
+Presenter: Chip Morningstar (CM)
+
+CM: So, as usual, not much to report. JSON is in its happy place.
+
+USA: Good for JSON.
+
+## Test262 Status Updates
+
+Presenter: Philip Chimento (PFC)
+
+Slide contents:
+
+- Since January, a certain amount of Igalia's test262 development has been subsidized by [Sovereign Tech Fund](https://www.sovereigntechfund.de/).
+- You may have noticed that many more tests landed in Q1 2024 than in Q4 2023
+- Worked with proposal authors to review tests and ensure coverage for **RegExp modifiers** and **Set methods**
+- Wrote tests for a [needs-consensus PR](https://github.com/tc39/ecma262/pull/2600) that had long been blocked on test coverage
+- We'd like to encourage proposals to help write **testing plans**. Providing good documentation for this is high on our list. Let us know what you think about this!
+
+PFC: All right. So I have a few status updates from Test262. One happy piece of news that I can report is that since January, a certain amount of the Test262 development that we have been doing is subsidized by the Sovereign Tech Fund. You can click that link for more information about this fund. They have been funding a lot of foundational infrastructure in the past year or so, and we are happy to add Test262 to that. Not unrelated, you may have noticed that many more tests landed in Test262 in the first quarter of 2024 than the previous quarter. So this makes a difference.
+
+PFC: Since the last update in February, we have worked with proposal authors to review tests, which resulted in Test262 now having full coverage of RegExp modifiers and Set methods – these were PRs that, among other people, the proposal authors contributed to, and they have now landed. There are now tests for a needs-consensus PR that had long been blocked; it had been open for a couple of years. This is the AsyncFromSyncIterator normative change.
+
+PFC: Another thing that we have discussed, about how to make our process easier to navigate for proposals, is that we would like to help write testing plans for a proposal when it enters Stage 2.7. Testing plans are not a new thing.
They have been around for a long time. Often they are written by the Test262 maintainers. And really the proposal authors are the ones who have the expertise to write these. So it’s high on our list to provide some good documentation for how to write a testing plan. And if you have thoughts on this, please let us know. And I would be happy to answer any questions.
+
+USA: There’s none on the queue. We can give it a second. No questions. Well, thank you, Philip, for the update.
+
+## TG3: Security update
+
+Presenter: Chris de Almeida (CDA)
+
+- (no slides)
+
+CDA: TG3. Meeting regularly. Lots of great discussion. TG3 previously had an APAC-friendly time for our APAC friends to attend. However, these meetings were attended quite poorly and, for a long time, attended by none of our APAC friends. We are happy to meet at an APAC-friendly time if we are getting them in attendance. But until that time, we are not going to do it, due to the attendance. But we will bring it back if the need arises. We have also, as part of that, when trying to figure out when we would like to move that APAC-friendly meeting to, resolved to use the same time as our other meeting, but increase the cadence from every two weeks to weekly. So those are at the same time, which is at 12:00 Central time.
+
+CDA: And the other item we wanted to take care of here was, with the increase in meeting cadence, we could use some more support in the conveners group. So KKL has agreed to join the conveners group, pending the approval of the committee. So I am requesting consensus for KKL to join the TG3 conveners group.
+
++1s from JHD, CDA, NRO, MM, JKP, DLM
+
+### Conclusion
+
+- KKL has joined the conveners group for TG3
+- TG3 meeting cadence increasing to weekly
+- APAC-friendly meeting times being removed from schedule due to limited attendance
+
+## TG4: Source Maps
+
+Presenter: Jon Kuperman (JKP)
+
+- [slides](https://docs.google.com/presentation/d/1t2Fu12Dc8kfe27rNDGVAfCXDXkU5cKC32qryJ_aiDug/edit?usp=sharing)
+
+JKP: Cool. Just giving a quick update on TG4/source maps and some of the things we are working on. I think the first big thing is that we began work on the test suite. Last time we talked about how we will internally gate proposals, trying to get buy-in from implementers and test coverage. We began working on a test suite. The goal is similar to Test262, in the sense that we want to have extensive coverage for all of the features of the source maps spec. Unlike Test262, we will want suites of tests that run in the generators/build tools as well as debuggers and browser tools, and then also error monitoring tools. Some tests can be shared among all 3. There’s a link in the agenda. And we would be eager to get any feedback from folks that have worked on this type of stuff before, as far as how to organize the tests, things like that.
+
+JKP: The next thing is that we had historically had two GitHub repositories. We had one for the specification and one for RFCs and features. We merged them together. It’s found under tc39/source-map. There are two more PRs to move over, but as far as everything else is concerned, this is the new source for everything.
+
+JKP: The big feature we are working on, which we have been calling the scopes proposal, is a proposal to embed scope information in source maps, which allows debuggers to reconstruct the original application’s scopes – setting breakpoints on original variable names, improving stack traces, and showing only the original code, not anything that is added by build tools or compilation.
We have been working on it as a group. We have begun work on implementation. We are helping guide the specification itself. We have a link here to the proposal. We would love any feedback, especially if you’re involved in debugging or source map generating tools. We have been making good headway with the existing specification. We keep finding text like this, where it says that the VLQ values are limited. Do we error if they’re not? Or is it just invalid? We have been making some great progress hardening the existing spec, making it clearer, and working with consumers and generators to see what they are currently doing.
+
+JKP: We have another RFC. The specification says that we need to support this sourceMappingURL comment. It’s a comment that can be in JavaScript, CSS, or WebAssembly and point to where a source map lives. It doesn’t say how to extract it. We were thinking of mandating that people parse the script and find the comment, but we got feedback that, for performance reasons, that’s not viable. So NRO put up an RFC with two ways of getting comments out: one with a regular expression and one with a full parse. We would love feedback if that is interesting to people. It’s linked here.
+
+JKP: The last thing, I think, is that we set dates for 2024. It will be June 24th and 25th in Munich, hosted by Google. We would love to have people there in person or remote. The themes are adding tests to the test suite, implementing scopes in tools, and then working together to finalize the text. In September we want to come to plenary and look for approval on that. If you are interested in attending, please let me know or join the Matrix room. And that’s all for my update. Thanks very much.
+
+SYG: Hi. I had a question about the test suite. You mentioned a few different things, like DevTools and tooling. Are there different sub-suites?
+
+JKP: I think it’ll be the latter. Right now we focus on browser devtools – Firefox and WebKit, and one coming from Chrome. In a sense we have not gotten there yet.
We don’t have any tool-specific ones. We will end up with 3 suites with shared tests between them.
+
+SYG: In that case, if I may recommend – I don’t know if this is realistic – the browser vendors and the JS VM teams have existing externally maintained test suites that they run. The least amount of friction would be that if you’re going to introduce multiple new suites, the ones that only require JS to run could be landed like 402’s tests – not part of 262, but existing in Test262. For the ones that need DevTools, I am not sure – this could be a good opportunity to add one. That is the core concern for things that require a browser shell: there’s no additional stuff that has to be set up.
+
+JKP: Yeah. I think that’s great. Would you mind if I followed up with you offline about some of the specifics? That sounds good, and it’s the type of feedback we are looking for right now.
+
+SYG: Sure. Please do.
+
+## TG5: Experiments in Programming Language Standardization
+
+Presenter: Mikhail Barash (MBH)
+
+- [slides](https://docs.google.com/presentation/d/1VNMBDlKZJNHJqh76oxiGXbANT9Oq4AYhtMTomudp4GU/edit?usp=sharing)
+
+MBH: Yeah. All right. Hello, everyone. This is a short update. At the last plenary meeting we got consensus to form the task group, and the co-conveners are YSV and myself. At the end of March, we had the first meeting. We had 8 participants, representing the companies mentioned on the slide. We introduced the idea behind TG5, and discussed planned areas of investigation, as well as responsible research practices. In terms of cadence, we plan to have meetings on the last Wednesday of every month. And we have alternating time slots so we accommodate the US, Europe and Asia. So the next meeting is Wednesday, the 24th of April. All of this is already in the calendar. We also have a TG5 repository, a TG5 team and a Matrix room.
Thank you, Chris, for facilitation – we will give a presentation at the Standards Group of the OpenJS Foundation at the end of the month about the current and planned activities of TG5.
+
+MBH: We also intend to arrange TG5 workshops which will be colocated with the hybrid meetings of TC39. We see this as an important step for building an academic community around TC39. So the first workshop we will colocate with the plenary meeting in Finland in June. The plenary starts on the 11th of June, and on the 10th we have a workshop, hosted in the city of Turku, two hours by train from Helsinki; the schedule is such that it is still possible to attend the community event on Monday in Helsinki. And I have just opened a reflector issue with almost all the details there.
+
+MBH: And we (YSV, MF, and myself) are currently preparing the TG5 charter document and will make it available as soon as it’s ready. That’s it from me.
+
+USA: Thanks, Mikhail. Also, thank you for, as you mentioned, making the meeting timings as inclusive as possible. It’s great; happy to hear about TG5.
+
+## Updates from the CoC Committee
+
+Presenter: Chris de Almeida (CDA)
+
+CDA: Yeah. Very briefly. CoC committee, meeting regularly. Two issues: one was dealt with and concluded, and another new one was reported, and we will work through that as per our process. Other than that, just a reminder, we are always looking for new individuals who would like to join the code of conduct committee. If you are interested, please reach out to someone on the code of conduct committee. Thank you.
+
+## “array last” proposal withdrawn
+
+Presenter: Jordan Harband (JHD)
+
+JHD: So this is just a notification. The champion withdrew that proposal because `Array.prototype.at` is already in the language, and they don’t see the need to continue it. So that has been done and the proposal repo is updated. That’s all.
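For context, the existing capability JHD refers to – `Array.prototype.at` with a negative index – covers the use case the `array.last` proposal targeted. A quick sketch:

```javascript
// Array.prototype.at accepts negative indices, counting from the end,
// which covers the "get the last element" use case that array.last targeted.
const items = ["first", "middle", "last"];

console.log(items.at(-1)); // "last" -- what a hypothetical items.last would have returned
console.log(items.at(0));  // "first" -- same as items[0]
console.log(items.at(-4)); // undefined -- out of range, no wrap-around
```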
+
+### Conclusion
+
+- The ‘array.last’ proposal has been withdrawn
+
+## TC39 website - call for translators
+
+Presenter: Chris de Almeida (CDA)
+
+- [gh issue](https://github.com/tc39/tc39.github.io/issues/406)
+
+CDA: The TC39 website is translated into the languages that you see here. There was a change to the – one of the menus. We are in need of help from the community for translations. JWK opened a PR for Simplified Chinese (zh-Hans), so thank you, JWK. We are still in need of German, French and Russian. We are also always welcoming brand new translations, but the immediate need is for the ones that you see here. Thank you.
+
+## Temporal normative bugfix
+
+Presenter: Philip Chimento (PFC)
+
+- [proposal](https://github.com/tc39/proposal-temporal)
+- [slides](http://ptomato.name/talks/tc39-2024-04/)
+
+PFC: (Slide 1) My name is Philip Chimento. I am going to be presenting a short update on the Temporal proposal. I am a delegate for Igalia. This work was done in partnership with Bloomberg.
+
+PFC: (Slide 2) So first a short progress update. I know you are used to hearing this, but we are approaching the finish line of the proposal. Currently, the proposal champions are focusing on making sure that all of the in-progress implementations are successful. So what we have been doing recently is fixing bugs found by implementations. I will present one later in the presentation. We are making targeted changes to make things easier for implementations and addressing concerns that are not specific to any feature, such as code size. I will give more detail about this in one of the following slides.
+
+PFC: (Slide 3) If you are an implementer of the language, we would like your help. We want to make sure that any doubts or blockers are addressed before the next plenary in June 2024, to make sure there are no further obstacles to implementation. So if something is preventing you from implementing Temporal, let us know.
We would like to work with you to resolve it very soon. So don’t wait. If we need to make changes to the proposal, we want to make them now and present them in June. If you want to talk about something or ask questions, we have a meeting biweekly on Thursdays at 8:00 a.m. Pacific Time. If that time doesn’t work, let me know and we can set up another time to talk. We already have some people working on implementations who actually join regularly to get the chance to ask any questions that they have. For example, we have somebody working on the Temporal implementation from Boa who joins every time, and we have somebody working on a polyfill implementation who joins regularly, and this has been helpful both for the implementations and for the proposal itself. Among other things, it’s how we discovered the bug that I am presenting a bugfix for.
+
+PFC: (Slide 4) A short summary of concerns that have been raised. Several we discussed in the hallway discussions in San Diego during the February plenary. So there are concerns about the compiled binary size on Android in V8. We investigated why it’s taking so much space – you can read more details on the issue – but we made a proof of concept showing how to reduce that size, without necessarily changing anything in the proposal. We heard from JavaScriptCore that there are concerns about the growth of the standard library, but not specifically about Temporal.
+
+PFC: (Slide 5) We heard from SpiderMonkey concerns about how this affects the installer size for Firefox. I would be interested to know more about this, and maybe do a similar investigation to find out where the size increase is coming from. We have heard from V8 concerns about the complexity of the proposal. And we heard from Adam Shaw, the polyfill implementer that I mentioned before, who has been going over the duration arithmetic and found some issues.
So in response to the concerns about the complexity, we are considering what we could drop or reduce the functionality of. One thing that we are talking about is user-defined calendars and time zones, and the associated classes and/or the subclassing. In response to the duration arithmetic bugs, we are considering whether we could drop the `relativeTo` parameter in the add and subtract methods of Duration. These are things that are actively under discussion, so I am not making any proposals right now. As I said before, we are open to suggestions and want to hear from you if you have opinions about this. It helps us if concerns can be made specific.
+
+PFC: (Slide 6) So that said, I will move on to presenting the normative change that we would like to ask for consensus on today. (Slide 7) That is an edge case in rounding `ZonedDateTime`. If you round to the nearest day, it was possible in rare cases, if you were dealing with a daylight saving time change, that an extra day was added. You can see this code sample that would have exhibited the bug, and what the correct and incorrect results are. I would like to once again thank Adam Shaw for discovering this bug. You can click through to the pull request if you would like to see exactly how the fix works. There is a test262 PR pending to add coverage for this case.
+
+PFC: (Slide 8) That was it for what I wanted to present. Are there any questions before we move on to asking for consensus on the normative change?
+
+USA: Yeah. First on the queue we have DLM.
+
+DLM: Hi. First, I wanted to say that I support the normative fix. Overall we are happy with the direction that Temporal is going. We have been staying fairly current with the editorial changes, thanks to the hard work. We did have some concerns about installer size, but those have been resolved. We spoke with the product managers for desktop products and we are in the clear.
That being said, while I think we are not asking for a reduction in complexity, we would not be sad to see the user-defined classes go. That’s it. Thank you.
+
+PFC: Thanks. That’s good information. Thank you very much.
+
+SYG: Yeah. Some more color on the complexity reduction request. This is not – I don’t think we have done a super thorough job reviewing, myself certainly not, being a non-expert in the space, to call out things to cut. The user customization is an easy thing for me to point out as a possibility, given that I am not familiar with the space, and it seems like a much more niche use case to customize aspects of your datetime and calendar handling. If the champions are willing to reduce complexity there, V8 and Chrome will take any reduction in complexity that we can get. There is the code size concern, and I want to thank you, Philip, for doing the investigative prototyping there. But there is also just the ongoing maintenance concern: this is the kind of thing that is very likely to get written once and then left alone, and V8 is maintaining the library in perpetuity. The fewer knobs it has, the better the chances that it is well maintained in the future. It’s a pretty broad, high-level concern. And given its size today, any kind of reduction in code complexity, in code size, is welcome.
+
+PFC: Okay, thanks. That’s good information as well.
+
+PFC: All right. (Slide 9) I would like to request consensus on the pull request linked here that fixes the rounding bug that I described.
+
+USA: All right. Let’s give it a minute. I would also reiterate that any sort of statements of explicit support are also welcome. All right. There doesn’t seem to be anything in the queue. So you have consensus.
+
+PFC: Okay. Thanks. (Slide 10) I took the liberty of writing a proposed summary for the notes, which I will show here and paste into the notes.
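Background for readers of the notes: the hazard behind the rounding bug is that a calendar day spanning a DST transition is not 24 hours long. The exact reproduction uses `Temporal.ZonedDateTime` and is in the linked pull request; this plain-JavaScript sketch only illustrates the underlying day-length wrinkle, using the US spring-forward date in 2024:

```javascript
// Illustration (not the Temporal repro itself): on 2024-03-10 the
// America/New_York offset changed from -05:00 (EST) to -04:00 (EDT),
// so that calendar day was only 23 hours long. Rounding to the nearest
// day therefore cannot assume a fixed 24-hour day length.
const startOfDay = new Date("2024-03-10T00:00:00-05:00");   // local midnight, EST
const nextMidnight = new Date("2024-03-11T00:00:00-04:00"); // next local midnight, EDT

const hoursInDay = (nextMidnight - startOfDay) / 3_600_000;
console.log(hoursInDay); // 23, not 24
```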
+ +### Speaker's Summary of Key Points + +- Consensus was reached on a normative change to fix a bug in rounding that occurred in rare cases having to do with DST. +- Over the next few weeks, we plan to dig into remaining concerns from TC39 delegates, particularly with the goal of reducing complexity. +- Follow the checklist in [#2628](https://github.com/tc39/proposal-temporal/issues/2628) for updates. + +## Duplicate named capture groups for stage 4 + +Presenter: Kevin Gibbons (KG) + +- [PR](https://github.com/tc39/ecma262/pull/2721) + +KG: Okay. So duplicate named capture groups. As a reminder, since it's been a couple of years since I presented this, this is a feature that allows you to have the same capturing group name in two parts of a regular expression, with the constraint that they can't both participate in the match, which is to say they have to be in different alternatives – so, separated by a pipe. But otherwise, it works exactly like you would expect. You can use back references to the capturing group name. The `.groups` object of the result will contain the value from whichever one actually matched. In the case of repetition, the last repetition defines it, the same way it works for regular capturing groups. + +KG: The specification text is quite simple, although it's not been reviewed by all of the other editors yet. That is, it's not been reviewed as a pull request; but of course the specification text was approved as part of getting to Stage 3. It has been shipping in Safari for a while, and is shipping in Chrome in 125, which isn't stable yet; I believe it's currently in the dev channel. Since this only causes syntax to become legal which wasn't previously legal, there isn't much risk of web incompatibility. SpiderMonkey uses V8's RegExp engine as the underlying engine, so they have to do a little bit of integration work to expose the new functionality, and I believe that work is underway.
So it's not yet shipping in Firefox. + +KG: I believe those are all of the requirements for Stage 4. I would like to ask for consensus for this proposal. Is there anything on the queue? + +DLM: We support this for Stage 4 and our implementation is in progress. + +SYG: Yeah. Looks good to me. The chances of this being reverted in Chrome are very, very low, so we don't need to wait. + +MM: I support. + +USA: Great. There are also statements of support by WH and DE. I think you have overwhelming support, Kevin. This is probably the most statements of explicit support that I have seen. Great work. + +KG: Okay. I will take that as Stage 4 then. Thanks very much. + +USA: I have a more exciting proposal for you. Would you like to try the second one in 7 minutes? + +KG: Yeah. Let's do it. + +### Speaker's Summary of Key Points + +- Proposal is shipping in Safari and Chrome and underway in Firefox + +### Conclusion + +- Stage 4 + +## Set methods for stage 4 + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/proposal-set-methods) +- [PR](https://github.com/tc39/ecma262/pull/3306) + +KG: All right. So Set methods for Stage 4. + +KG: For this proposal, the pull request is open and passing CI. It's approved by one of the other editors, and has also had review from jmdyck, an external contributor who is thorough about catching certain issues; his feedback has been incorporated. There's nothing in it relevant to implementers. This proposal has been at Stage 3 for quite a while. It was blocked on tests for a while; the tests were landed, and that unblocked implementation and shipping. + +KG: Again, it has been shipping in Safari since 17, since September, and in Chrome since 122, which was a month or so ago – I forget – but that Chrome release is stable. And I know that Firefox has an implementation, but I don't believe that they have flipped the switch to ship it yet. But the pull request is open and approved by one of the editors.
And shipping in two major implementations. Those are the Stage 4 requirements, and I would like to ask for consensus on Stage 4. We have talked about this recently, so I won't go through it in much detail, but to recap, this is adding 7 methods to Set.prototype: union, intersection, difference, symmetricDifference, isSubsetOf, isSupersetOf, isDisjointFrom. + +USA: Great. You have two statements of support on the queue already. Three now. + +DLM: Yes. It is completely implemented, but it has not been shipped in a release yet. I hope to do this this week. Get the work done, that is; shipping will be a few weeks later. + +Also +1s from MM, WH, LGH, JHD. + +KG: Okay. Thanks all for the explicit support, and for implementations and so forth. That's all I got. + +### Speaker's Summary of Key Points + +- Proposal is shipping in Safari and Chrome and underway in Firefox + +### Conclusion + +- Stage 4 achieved + +## Joint-iteration: confirm our stance on issue 1 + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/proposal-joint-iteration) +- [issue](https://github.com/tc39/proposal-joint-iteration/issues/1#issuecomment-1981587641) + +MF: So issue number 1 was presented at the last meeting, when I was presenting the joint iteration proposal. Was it the last meeting? It might have been the meeting before; I am not sure. We had talked about all the open issues, one being number 1. And the issue is asking whether we should have a joint iteration facility on arrays, as well as iterators. I am not particularly opposed to this, but I am also not really interested in pursuing such a facility on arrays, because I don't think it provides as many obvious benefits as it does on iterators, which are harder to coordinate iteration of. This was asked by JHD. I asked at the meeting whether anybody thought we should do this. Nobody spoke up in favor, and one person spoke against it, that person being GCL.
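For context, the kind of joint iteration facility under discussion can be sketched in userland (a hypothetical `zip` helper; the proposal's actual API shape and name are not settled in this discussion):

```javascript
// Userland sketch of joint ("zipped") iteration over several iterables.
// Arrays and iterators are both iterable, so one helper covers both.
function* zip(...iterables) {
  const iters = iterables.map((it) => it[Symbol.iterator]());
  while (true) {
    const results = iters.map((it) => it.next());
    if (results.some((r) => r.done)) return; // stop at the shortest input
    yield results.map((r) => r.value);
  }
}

console.log([...zip([1, 2, 3], ["a", "b", "c"])]);
// [ [ 1, 'a' ], [ 2, 'b' ], [ 3, 'c' ] ]
```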
I took that as committee feedback to not pursue arrays in this proposal – or not at all, but perhaps as a separate proposal – but JHD understood that differently: that we may be just delaying that. I am looking to advance joint iteration to Stage 2.7 at the next meeting; I feel it would be ready for it. And this will need to be resolved one way or the other before that happens. + +MF: So I just wanted to see if there was anybody who had strong opinions in either direction on this, so that we can hopefully move joint iteration forward. Not during this meeting; at the next meeting. Anybody in the queue? + +JHD: I am just adding some color here. So if the committee in general feels that it's best to do it separately, that's fine. But pretty much every design decision made for iterators seems like it would constrain design decisions for arrays, such that there would be very little to talk about. It would be a process overhead and time delay to do it as a separate proposal. That's fine - what is a few months in the lifespan of JavaScript? But, you know, I would essentially be just duplicating what is in your proposal and then writing a bunch of text and making a repo and stuff. So I can do that, if that's what the committee thinks. It just seems like a waste of time for me, and for the committee. But if it's decided to do them together, then I am happy to do whatever work is needed to contribute to this proposal, so that MF doesn't have additional burden for something he's not particularly interested in, including spec text and tests and whatever. That's intended less as a carrot than as a lack of a stick. And I see MM is asking what value it adds. There have been a number of folks commenting in various proposals and spaces over time that iterators are slow and it's ideal to avoid them.
Some people have brought experience from other languages that it's best to – they prefer the clarity of using a simple list format, whether that's an array or whatever, over using a full iteration thing. This benefit is lesser in languages designed with iteration as a first-class primitive from the beginning. On Matrix, people coming from one of those languages said they sometimes prefer just using a straight-up list. + +JHD: I like to use iterators, and with the iterator helpers I would use this version of the proposal when I am doing multiple operations together. Otherwise I would prefer to work with arrays, and if this is only an iterator helper, then what I will be doing is either writing my own function, or using the helper and immediately converting it back into an array, which adds a lot of performance overhead. That's the value: it's nice when arrays and iterators have similar operations, and I can use them in similar ways; then if I want to refactor in either direction, it's relatively trivial to do that. + +MM: So let's make sure I understand. So the main motivation here is performance. Performance aside, is there remaining motivation for this? + +JHD: Yes, and simplicity. I personally find (and I have seen this expressed by others, so I am not completely alone) that often it's simpler to think about and reason about a static array of stuff and transform that, versus a kind of stream-like approach where you're chaining a bunch of transformations. Even though the effect is the same, the mental model is a bit different. + +MM: Let me go directly to my general concern with such things: the psychological size of the language. Psychological size in terms of cognitive burden on programmers. The argument here would be that there's already quite a lot of parallelism, a strong analogy, which is where the iterator helpers came from.
So given that we are adding this joint iteration to the iterator helpers, adding it also to arrays, in order to keep the parallelism of the two systems, in a way reduces the cognitive burden in the language rather than increasing it. + +JHD: Yeah. In general, that is also my philosophy: similar things should have similar operations, even if, on their own, we might not have added it to one of the things. + +MM: Given that I value cognitive burden over, you know, length of spec text or implementation complexity, this sounds like a good rationale all around. I am in favor. + +CDA: All right. We have less than 2 minutes left. SYG is next. + +SYG: I want to – this is one of those mechanical things that may be contra the goal of reducing cognitive burden. I don't think I have a strong opinion on whether joint iteration ought to be added to arrays or not. But we have had multiple web incompatibilities with new Array prototype methods, to the extent that we are not very interested in adding more of them. So if we were to add these, I want to clarify: is the thinking that we add these as static methods on the Array constructor? And if so, does that help the cognitive burden thing, or make it a little bit worse? Because it's unlike the other array prototype methods. + +SYG: The first part of the question was to MF. Is the plan – I guess your plan is to not add these to arrays. So maybe the hypothetical doesn't help. + +JHD: My plan, given the feedback about prototype methods on arrays in general, would probably be to just use statics. To me, the placement isn't as important as the presence of the operation - and especially with editor hovers and type hinting and things like that, I don't think it will make much of a difference. + +SYG: You can also do the to- and from-array conversion via an iterator intermediate.
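SYG's point can be sketched concretely: an iterator-only facility is still usable with arrays, because you can already round-trip between the two (sticking to long-standing APIs here for portability, rather than the newer iterator helpers):

```javascript
// Array → iterator → array round-trip, using only long-standing APIs.
const arr = [1, 2, 3];
const iter = arr[Symbol.iterator](); // to an iterator
const back = Array.from(iter);       // and back to an array
console.log(back); // [ 1, 2, 3 ]
```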
So if the goal – so that goes into my second topic. If we tease apart the performance motivation from the convenience or cognitive-burden-reduction goal: Mark, what are your thoughts on the fact that, for any iterator operation, you can already go to and from arrays? We have static helpers for that. The affordance is there today. + +MM: Yeah. I think, therefore, I would not think it's a problem if we omitted it. I wouldn't mourn it. But in general, are there other iterator helper methods that exist on iterators that don't have a parallel directly on arrays? + +MF: Yeah. There are many iterator helper follow-up proposals that are adding things that are not on arrays. The minimum set we arrived at in the original iterator helpers was defined that way because those were the things that were easiest to get through as a bundle; because they mirrored the array methods, they were the obvious set. Everything else was going to be pursued individually, one by one. + +MM: So if the expectation is that iterator helpers will over time grow methods that do not have parallel functionality available on arrays, then the symmetry is already broken, or expected to be broken, so omitting this one would fit into the broken symmetry. Either way, what that says is that the cognitive burden is a wash; it doesn't particularly argue for it. And I will defer to others on the motivation; I am purely concerned about the cognitive burden motivation. + +SYG: I am done with my items. + +CDA: We are technically past time. MF, if you want to take a look at the queue, and maybe we can get through it quickly, we can give a couple more minutes. + +MF: I am fine with just capturing the queue for now. And I think we don't necessarily need to resolve this during this call; I just wanted it to be resolved before the next meeting. If we have the eyes on it, like the attention of the people who care – please continue the discussion in issue number 1 – I think we can resolve this just fine.
I had 2 points to wrap up with, I guess. One was, if we add this to arrays, I would like to not set a precedent that every iterator helper method we pursue needs an array parallel. And the second point I wanted to make was… I've lost it now. Apologies; I should have written it down. Yeah. Please continue on issue number 1. Unless anybody wants to make a point of order right now to continue the queue. If nobody objects, I will capture the queue for myself and share it in Matrix, so we don't take up more time than we need. + +KG: I would like to get back to this if we have more time we need to fill later. + +MF: Yeah. I will request of the chairs that we add an extension item if we have free time later. + +## Promise.try for Stage 2.7 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-promise-try) +- no slides + +JHD: So, I'm talking about `Promise.try`. I was hoping to ask for 2.7 during this meeting. One of the reviewers has not confirmed, but the other reviewer has, as well as all of the editors. There's only one open question to resolve: do we pass arguments to the function? Specifically, we have this pull request, which is relatively small if you ignore the generated output. It adds argument forwarding. The proposal on main takes a callback and calls it with no arguments. This pull request, which was requested by a number of delegates, also forwards any additional arguments provided to `Promise.try` to the callback. There's no other change. So my hope is to get consensus for 2.7, with this pull request. At which point, I will merge it and begin work on the tests. + +CDA: Mark? + +MM: So the addition of the arguments, I like that. But if that's there, wouldn't people equally expect it on `catch` and even `then`? Isn't it – so once again, a cognitive burden thing. And let's leave `then` aside. Is it something we could actually add to `catch`?
JHD: I mean, the difference with `catch` and `then`, I think, is that those are callbacks added to an existing Promise. You're already in a promise pipeline using then/catch/finally. + +MM: I see. + +JHD: Whereas this is when you are creating a Promise pipeline, or entering one. + +MM: Okay. + +JHD: And so I agree – on their surface, they seem similar. But I think that the parallel between the then/catch/finally API forms and the syntactic forms is more important than the parallel between `Promise.try` and then/catch/finally, even if they weren't conceptually distinct, which I think they are. + +MM: Okay. I accept that. That seems like a good rationale. + +JHD: Thank you. + +MM: The other question I had is: is `Promise.try` equivalent to wrapping the block with an async IIFE? + +JHD: Yes. (types it out) + +MM: Given the symmetry with an async IIFE, explain why it's worth adding `try` rather than just encouraging people to use an async IIFE. + +JHD: Sure. This was discussed during the Stage 1 and 2 discussions. But essentially, the first part is that if you're supporting older environments, syntax is more expensive to transpile. But the other thing – that question was the reason why this proposal was stuck in Stage 1 for like 9 years. The userland equivalent receives 44 billion downloads. There is empirical evidence that the functional form is preferred or desired by many over just using an async IIFE. I also have a subjective aesthetic opinion: immediately invoked functions are messy, and in a world with modules they are largely obsolete, and I prefer to keep it that way. That is subjective and nobody has to agree with it. The point is that the package, and the functionality it provides, is something I am trying to obviate. + +MM: I like the empirical evidence. I have no objection. + +JHD: Thank you. + +KG: This was a response to the thing MM brought up earlier. I want to make sure it's clear that this is different from `then` and `catch`, in that those are callback-taking methods.
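The semantics under discussion can be sketched in userland (illustrative only, not the spec text; the argument forwarding matches the pull request described above, and `promiseTry` is a made-up name to avoid patching the global):

```javascript
// Userland sketch of Promise.try semantics, with argument forwarding.
function promiseTry(fn, ...args) {
  return new Promise((resolve) => {
    // A synchronous throw from fn rejects the promise instead of
    // escaping; a returned value or thenable is adopted as usual.
    resolve(fn(...args));
  });
}

promiseTry((x) => x + 1, 41).then((v) => console.log(v)); // 42
promiseTry(() => { throw new Error("boom"); })
  .catch((e) => console.log(e.message)); // boom
```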
But this is more a generic function-invoking method. For anything that takes a generic function, like `Function.prototype.call` or `Function.prototype.apply`, which is a way of invoking a function, it makes sense to forward arguments. For things which expect a function of a specific form, like `catch`, it makes less sense. + +SYG: I want to clarify something I didn't understand about the older-environment argument, where JHD said the syntax transpilation could be expensive. So the situation is that there is an older environment that does not have async await, where you have to transpile away async await, but they have the new `Promise.try` method, which is a new standardized thing? I don't understand that. + +JHD: `Promise.try` is polyfillable. Async syntax is not. I mean, it doesn't have to be installed in the environment; it can be a function. `Promise.try` is a very tiny subset of what an async function can support. And so – if that's the only issue, certainly you could write a static analysis transformation that tries to determine when someone is using an immediately invoked async function for this purpose and replace it with that function. In practice, that doesn't exist. + +SYG: I see. Specifically, the concern is that – okay. For users that are, in source, pre-transpile, writing modern JS, but targeting old environments – so old they don't have native async await support – if we standardize `Promise.try`, you can ship that instead of the async/await transpilation. Is that the correct understanding? + +JHD: That's what I meant. I don't think that's a primary motivation for the proposal; that's a side benefit for those of us who do that sort of thing. The primary motivation is that it's clearer about what I am trying to do than any form of immediately invoked function is. + +CDA: KG? + +KG: Yeah. I am fine with the main motivation of this. The polyfill-ability one, I am confused by. Like, you could have a package that did that.
If you are not – + +JHD: Right. You're right. And I think me mentioning that caused more confusion. + +KG: Okay. + +JHD: Yeah. Polyfillability is not a motivation we have ever agreed on as a committee as something that motivates design decisions or justifies the inclusion of anything, and I am not doing that here. + +KG: That's all I wanted to establish. + +DRR: I mean, I think, you know, one of the arguments here is clarity. And – I really don't know if I am totally sold on the use case. But if we are, and the whole goal is clarity, `try` really sounds like it has something to do with exceptions in some capacity with promises. + +JHD: It does. + +DRR: Yeah. I mean, it is, but… + +JHD: The specific case this is trying to make ergonomic is when a function throws a synchronous exception. + +DRR: So it co-opts that. You can't do something like `Promise.resolve` with calling the function itself? + +JHD: You can't, because it throws the exception. You have to wrap it in a try/catch or whatever. + +DRR: Okay. Got you. So this is effectively calling the function, then wrapping that in a try/catch and rejecting instead. + +DRR: Fair. Yeah. I wish it was something like `adapt`. + +JHD: I am not attached to the name. If you look at the other userland libraries, it's always been called `try` – no, there's an `attempt` in there, and an `fcall` (but I don't think `fcall` is anything anyone would support). `attempt` is an interesting alternative, but it only appeared once in the list, and even that library still has `try`. + +DRR: Okay. Got you. All right. + +JHD: Given that, I would like to ask for consensus for 2.7 with that pull request merged, that forwards arguments. + +CDA: You have a +1 from MM. + +CDA: Do we have any other voices explicitly supporting `Promise.try` for 2.7? DE? + +DE: So in the discussion, we heard a number of people sort of vaguely wondering about the motivation. I guess that's how I feel about this proposal as well.
It doesn't seem bad, but it's not something I would personally reach for. I wonder if we should do something like a temperature check to understand how well motivated people in the committee feel this proposal is. I know that's not usually the way we use temperature checks, but I am a little bit concerned about the, you know, proportion of skepticism versus explicit support. + +JHD: I mean, if we feel that's appropriate, we can certainly do that. But that would be, I think, the question to ask when going for Stage 2. Stage 2 approval means the committee approves the motivation. + +DE: Sure. But this is quite common in proposing things for Stage 2.7. If the proposals that I got to Stage 2 didn't require more motivation after Stage 2, it would be a lot less work. Yeah. This is why we have these conservative defaults in committee, why we are requiring this repeated consensus: to make sure. + +JHD: Yeah. I mean, I think – we can certainly go through the exercise if we feel it's a good use of committee time. But if there are no negatives and some positives, and a significant amount of userland evidence that it's desired, that seems pretty straightforward to me. + +DE: So I want to leave it up to you whether you want to allow the temperature check. If you think it's inappropriate here, then let's not do it. + +CDA: Let's go to MM on the queue. + +MM: Yeah. I am fine with doing a temperature check. Before we do the temperature check, I want to add a cognitive burden argument in favor of this proposal. Promises have `catch` and `finally`. So people looking at that would naturally look for `Promise.try`, and I think it's less surprising for it to be present in a way that works, that chains well with `catch` and `finally`, than it would be for it to be absent. + +JHD: Thank you. I agree with that. As the champion, it would be self-serving to disallow the temperature check if offered, and I am not trying to be self-serving.
If folks think it's a good use of committee time, we can certainly do it. I am not seeing any indication that it's a good use of committee time, but I want to defer to the room on that. + +MM: Let's do a temperature check. + +JHD: Okay. Let's do it. + +CDA: Let's define exactly what the parameters are. What do the different choices signify? + +DE: Maybe the temperature check is on: does this proposal seem useful to you? I think the strongly-positive to unconvinced/confused spectrum is kind of perfect for that sort of question. What do you think, JHD? That's like the question. + +JHD: Yeah. If anyone has a more negative sentiment, jump on the queue and express it. Otherwise, the default labels on the emojis are sufficient. + +DE: "Does the proposal seem useful to you?" + +KG: Can we clarify: are we saying does this proposal seem to you to be useful, or does it seem to be useful to you? Because there are lots of things I personally am never going to use, but sure, they seem useful. + +DE: Okay. "Seems to you to be useful" is weaker in all senses. Yeah. + +JHD: Useful to somebody, in other words. + +CDA: Okay. Is that clear for everyone? Is it not clear to anyone? + +DRR: Restate it once more, please. + +DE: Does the proposal seem to you to be useful, generally? + +DRR: I think that seems clear. + +CDA: The temperature check interface is now visible. I guess we will give it another minute and a half, or until there is no movement. + +NRO: My position is that it doesn't seem bad, but I am not convinced it is useful in general. Should other people with my position vote 'indifferent'? How is that different from what it usually means? + +JHD: Like, you're – you're not convinced by the arguments? + +NRO: I am not convinced by the popularity argument, because I can't see the benefit of the packages, and it seems like the position that others mentioned: like, this is not bad, but they don't see it being useful.
I just put "indifferent" as DE expressed the same position about it. + +JHD: Yeah. I mean, itā€™s more like, do you think itā€™s useful to sufficient people? Obviously people are saying this is useful for me and you canā€™t invalidate that. Itā€™s obviously youā€™vesful to somebody. But so yeah. I mean, I donā€™t know. I think either one is fine. + +CDA: All right. I think we have gotten all the votes we are going ā€“ we just had a new one show up. It appears that the totals basically are 6 positive and 10 negative. + +DE: Indifferent isnā€™t quite negative. I think itā€™s best to come back with a little bit more evidence for this. Obviously, the vote doesnā€™t come to any sort of conclusion itself. But thatā€™s what I would recommend to the champion + +JHD: Yeah. I mean, I think my argument is complete. Like, I donā€™t know what additional evidence I could provide. And I mean I thought I had made that presentation when it achieved Stage 2. So I am not clear on what value that would add. So if someone wants to withhold consensus for 2.7, for me to do that, thatā€™s fine. But I donā€™t really have a ā€“ I would need a concrete call to action to what to bring back because it seems that that is there + +CDA: We are almost out of time. MM is on the queue + +MM: Yeah. I am indifferent to abstain, not negative. That is there to be the negative. Based on this you should call for consensus right now. + +JHD: Then yeah, I will repeat that consensus for 2.7? + +MM: I support. + +WH: I support. + +MM: (on queue) Support + +BSH: I support + +CDA: A + 1 from TKP. Okay. Are there any opposed? Are there any who are not explicitly opposed but would like to state some dissenting views for the record? + +DE: I kind of want to dissent from Markā€™s interpretation of indifferent as abstain. You can abstain if you abstain. I voted indifferent to mean what Nicolo said, but I am not objecting to consensus. 
+ +MM: It's not in front of us anymore, but I don't remember there being a choice to abstain. + +DE: You can just not vote. + +MM: I did not mean abstain. I voted for the same reason as Nicolo, which he just explained. + +JHD: To be clear, to me it's a weak negative: someone who is not willing to block consensus on it, but wouldn't vote for it. + +DE: Right. Yeah. I think that matches. + +CDA: Also, they had multiple opportunities if they were going to block, but they still can. They can speak up right now. Hearing nothing – and then of course Dan's dissenting view on the meaning of the temperature check is recorded for the notes. + +DE: A few people are typing in the chat. Maybe... let them share their thoughts. + +CDA: We are past time. So unless it rises to the level of blocking consensus, we are going to move on. Okay. `Promise.try` has 2.7. Congratulations. + +JHD: Thank you. + +### Speaker's Summary of Key Points + +Some hesitation about motivation; a number of people are unconvinced of the utility - but nobody objected, and multiple members are convinced of the utility. + +### Conclusion + +Promise.try has stage 2.7. + +## `RegExp.escape` for stage 2.7 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-regex-escaping) +- no slides + +JHD: All right. So now we have regular expression escaping. This one does have all of the open questions resolved, including the hex escaping we discussed in the previous meeting. All the reviewers and the editors have signed off. As a review, what `RegExp.escape` does is what it has always done: it takes a string and escapes it so that you can make a regular expression with it, and it will do what you expect. + +JHD: The escaping it does now is much more thorough and verbose than in the past. As a result, the output is not just much safer, but is, in theory, safe for creating a regular expression.
Even if you concatenate the string with something else that has meaning inside regular expressions. So I am requesting consensus for Stage 2.7. + +MM: So we reviewed this in Agoric. RGN is not available today, but my understanding from what he explained when we reviewed this is that there were – you know, the original agreement about how safe this was, that allowed it to go forward, was that except for the even/odd backslash case, it was safe in essentially all contexts. What I understood from Richard was that if there's an additional escape backslash before the first character, then that property is restored, and without that, there was an exception to that property – a particular context that could be confused. I can look that up. But do you know what RGN is talking about, from previous feedback from him? + +JHD: I mean, he's filed a number of issues that have all been resolved. I am assuming that that's what they are. If not, I am not aware of it. + +MM: When we went over this, just very recently, like in the last day or two, the feedback from him was that this issue was not resolved. Does `RegExp.escape` put a backslash before the first character of the string being escaped? + +JHD: I don't think it does that unconditionally, no. I am pulling the spec up now. [inaudible] + +MM: Okay. I need to look up the – it will take me a moment to get the – + +JHD: I mean, the meeting is – we still have a few more days this week. I would be content with Stage 2.7 conditional on this issue either being resolved or shown to be a non-issue. I will bring it back later this week. + +MM: So let me just make sure that we're on the same page. If there is a realistic issue, and if it is solved with an extra backslash somewhere – since we have already given up the readability of the output – would you have a problem with a backslash as necessary to restore the original safety claim? + +JHD: Correct. + +MM: Great.
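For background, a naive illustration of why escaping is needed at all (this replace-based escape is NOT the proposal's algorithm, just a common userland approximation):

```javascript
// Untrusted input interpolated into a RegExp without escaping:
const userInput = "1+1=2?";
// "+" and "?" act as quantifiers, so this matches the wrong things:
console.log(new RegExp(userInput).test("11=2")); // true

// Escaping the common metacharacters makes the input match literally:
const escaped = userInput.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
console.log(new RegExp(escaped).test("1+1=2?")); // true
console.log(new RegExp(escaped).test("11=2"));   // false
```

The discussion above is about the harder part: making the escaped output safe even when concatenated into arbitrary surrounding pattern text, which a simple character-class replace like this does not guarantee.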
+ +KG: I do want to clarify the original safety claim, though. It's not that using `RegExp.escape` is safe in all contexts, but that it's safe in contexts where it doesn't clearly mean something else. If you put the output of `RegExp.escape` immediately after a single backslash, then yeah, it is not going to match the thing that you put into `RegExp.escape`; it's going to do something else. And that's impossible to avoid, as far as I am aware. And there are a couple of other places that break that property – in particular, off the top of my head, and I am not going to claim this is a complete list: after `\x` or `\c` or `\u`. + +MM: `\c` and `\x` ring the right bell. I don't think RGN was talking about `\u`. + +KG: `\u` is isomorphic to `\x` here. + +MM: In that case, it's probably included. Does an extra backslash solve `\c` and `\x` and `\u`? + +KG: So it's not an extra backslash per se. The solution for those is that if the first character is an ASCII letter, then you escape it with a hex sequence, the same way that if the first character is an ASCII digit you escape it with a hex escape – which we needed to do because of backreferences. After `\1`, you need the output of `RegExp.escape` to not be interpreted as being part of the escape sequence. We didn't do that for `\x`, `\c`, and `\u`, because there's no reason to have `\x` followed by the output of `RegExp.escape`; that clearly is not going to do anything sensible. But I believe escaping ASCII letters is sufficient; I'll have to look up exactly what characters can occur after `\c` to confirm that. And to be clear, nothing can possibly solve the issue of putting it after a `\` by itself. So this would only apply to those three. + +MM: Yeah, the original agreement gave up on even versus odd backslash. That was understood. It was just that I didn't want there to be any other contexts, and at the time we had the agreement, we believed there were no other contexts. + +KG: We did mention `\x` and `\c` in the original presentation, to be clear.
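KG's backreference point can be sketched concretely (an illustrative example, not from the presentation; it relies on Annex B's legacy handling of `\12` in non-unicode-mode RegExps):

```javascript
// If escaped output starting with a digit were emitted as-is, it could
// merge with a preceding escape: "\\1" + "2" parses as the single
// escape \12 (a legacy octal escape here, since there is only one
// group), not as "backreference \1 followed by a literal 2".
console.log(new RegExp("(a)\\1" + "2").test("aa2"));     // false
// Hex-escaping the leading digit prevents the merge:
console.log(new RegExp("(a)\\1" + "\\x32").test("aa2")); // true (\x32 is "2")
```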
I mean, if you are saying that this is an issue, that's fine.

MM: Okay. So I believe you. I just don't remember that. And in any case, since we have given up on readability, I don't see any reason not to do this, if there is a solution that gives us safety in strictly more contexts.

KG: I am not opposed to doing this, but I don't want to phrase this as giving up on readability. We have decided that we are making tradeoffs around readability that favor, for example, not having to change the grammar of regular expressions, which is a fine tradeoff to make. And we could decide we want to make the tradeoff to favor the ability to use this after `\x`, `\c`, or `\u` over being able to read the output, if the output is a bunch of ASCII letters. That's a tradeoff we could choose to make, but I would not phrase it as giving up on readability.

MM: Okay. That is the side of the tradeoff that I would strongly prefer. I think that safety dominates, and as far as I am concerned, I have given up on readability.

KG: Fair enough.

CDA: We have 10 minutes left. Waldemar is next.

WH: A couple of items. One is about safety, one is about the recent change to make escapes less readable. The proposal makes the argument that it's safe because it escapes all whitespace and newlines, as stated in the safety explainer. But the proposal does not escape newlines. So I don't understand what the intent is.

KG: That is surely an oversight. The purpose of escaping whitespace, to be clear, is that there is a proposal for x-mode RegExps, which allows whitespace to be ignored unless you write `\` plus whitespace or whatever, which can improve readability. You won't use newlines in literals, but you can use them with the RegExp constructor, as you already can. So the intention was to escape everything that is usable there. Okay. So, sorry, WH, this is my fault. I had it in my head that whitespace included line terminators.
This should include the JavaScript LineTerminator characters: CR, LF, and the two Unicode code points LINE SEPARATOR and PARAGRAPH SEPARATOR.

WH: Okay. Following up on this topic, there was also a GitHub issue about surrogate handling and what happens if more Unicode characters get added to the whitespace category. But there is a safety concern, which wasn't addressed: if there is a whitespace non-BMP Unicode character and somebody dribbles it to `RegExp.escape` one code unit at a time, it won't recognize the unpaired surrogates as whitespace. Then they concatenate them, and those become whitespace which is unescaped.

KG: That is true. The obvious fix for that is to say that we also escape unpaired surrogates.

WH: Yes.

KG: I don't see any reason not to do that.

JHD: That would be a trivial spec text change.

WH: Okay. So that covers my safety point. My readability point is that I don't have ASCII codes memorized. And I would much prefer IdentityEscapes, or more readable escapes, to escaping via `\x` with ASCII codes, since it's much easier to understand the output and it doesn't affect safety in any way.

KG: To be clear, for some of the items in this list – for example, dash – you can't escape dash outside of a CharacterClass. Are you suggesting that we modify the RegExp grammar so that `\-` is legal, or only use the ones that already have escapes?

WH: You can't escape dash outside of a CharacterClass?

KG: In a u-mode RegExp.

WH: OK. I am not suggesting we modify the RegExp grammar in any way, but I am suggesting, for the characters for which the IdentityEscape exists and is uncontroversial, that we use it. I would prefer `\n` instead of `\x0A` for line break, and I would much prefer `\.` to `\x2E` and `\` followed by a space instead of `\x20`.

KG: `\` followed by a space has the same problem: it's not currently legal in u-mode RegExps.

WH: Okay.

KG: But there is a subset that is legal.

WH: Yeah.
For the things which are currently legal, I would prefer to use those. But I am not asking us to change the RegExp grammar.

KG: As the person who originally tried to get us to change the RegExp grammar to use all of these, I am happy to recover what readability we can for the subset that is feasible.

WH: Okay.

JHD: Same.

JHD: To summarize, it sounds like the additional changes that I should attempt to make are: one, that unpaired surrogates should be escaped. Another is that we should attempt to restore readability for newlines and perhaps a list of other characters – whichever characters we see fit that are also legal in both u-mode and non-u-mode RegExps – so, for example, `\n` instead of the hex code for it. And potentially an additional change that Mark was referring to with the first character in the string. And with those three changes, I would then come back at a future meeting – not this one, because that's too much change for me to be comfortable trying to shoot from the hip on – to request 2.7 with those changes. Does that sound like an accurate summary?

MM: Except for the "potentially". I am asking for that.

JHD: Okay. Yeah. If we can get an issue filed for that, MM, that would be helpful. But that is included.

WH: I would like to push back against MM's request for escaping the first character if it's a letter. I don't understand the rationale for it. If you're escaping user inputs, the context they're placed into should be a valid regular expression on its own. You shouldn't be concatenating `\x3` followed by user –

MM: That was – I mean, that was exactly why I was opposed to this entire proposal in the first place, and was insisting on a template tag that could do context-dependent escaping and deal with the backslash even/odd problem. The thing that convinced me to go forward is the understanding that the only context that remained problematic was even or odd backslash.
And RegExps are sufficiently complex, with a sufficiently large surface area and number of features, that if it's anything more than just even or odd, it just drops out of memory.

KG: For `\x` followed by `RegExp.escape`, I just don't think someone is going to try to do that and expect any particular behavior.

MM: I think that having a simple-to-state safety property is a very, very important aspect of having a safety property.

MF: Kevin, can you clarify why you think `\x` is different from `\0`?

KG: `\0` is a totally reasonable thing to write. You are expecting to match a null code point followed by some user input.

MF: And `\x` is an IdentityEscape for x.

KG: No. No one is writing that.

MF: But it's valid.

KG: In a non-u RegExp, that's true, but no one is writing that.

MF: Okay.

MM: I got clarification from RGN. The issue is issue number 66. He's not able to be online, but he let me know; he sent me a link to issue number 66.

JHD: Okay. This one is currently closed. So if I need to reopen it, that's fine.

MM: Having gotten further clarification from Richard, my interpretation is yes, you need to reopen it.

JHD: Okay. Will do.

MM: Wednesday and Thursday, Richard should be back.

CDA: Okay. We are just about at time.

JHD: And that conclusion from before – I will adapt that into a summary at the end, in the notes. No advancement today. I will come back at a future meeting to request 2.7 again. Thank you.

### Conclusion

- Will make additional changes and return in a future meeting:
  - unpaired surrogates should be escaped
  - we should attempt to restore readability for newlines and perhaps a list of other characters, whichever characters we see fit, that are also legal in both u-mode and non-u-mode RegExps (e.g. `\n` instead of the hex code for it)
  - an additional change from MM/RGN with the first character in the string
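The first two conclusion bullets above can be sketched in userland terms. This is a hypothetical illustration, not the proposal's spec text: `escapeWs` shows preferring short escapes like `\n` over `\x0A`, and `escapeLoneSurrogates` shows hex-escaping unpaired surrogate code units so that two escaped halves cannot recombine into an unescaped character after concatenation.

```javascript
// Prefer the short ControlEscape forms where they exist, else fall back to \xHH.
const shortEscapes = new Map([['\n', '\\n'], ['\r', '\\r'], ['\t', '\\t'], ['\v', '\\v'], ['\f', '\\f']]);
const escapeWs = c =>
  shortEscapes.get(c) ?? '\\x' + c.charCodeAt(0).toString(16).padStart(2, '0');

// Escape a lead surrogate not followed by a trail surrogate, and a trail
// surrogate not preceded by a lead surrogate; leave proper pairs alone.
const escapeLoneSurrogates = s =>
  s.replace(
    /[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/g,
    c => '\\u' + c.charCodeAt(0).toString(16)
  );

console.log(escapeWs('\n'));                    // "\n" rather than "\x0a"
console.log(escapeLoneSurrogates('\uD83D'));    // "\ud83d" – lone surrogate escaped
console.log(escapeLoneSurrogates('\u{1F600}')); // unchanged – a proper pair is fine
```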
## Make eval-introduced global vars redeclarable for stage 2.7

Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-redeclarable-global-eval-vars)
- [slides](https://docs.google.com/presentation/d/1p--DB6SNlDv5XOn9g4bmwoymYQ93VWK_RDrCHLJJd60/)

SYG: Okay. This is literally the same slide deck from last time, so I will quickly go over it again. It's a recap for folks not here last time, but the content and the normative changes I am asking for are basically unchanged.

SYG: So there's this thing in the spec: a slot on the global scope basically called VarNames. And what is this thing? In general, in the language, we disallow lexical bindings – `let`, `const`, and `using` bindings – and `var` bindings sharing the same name in the same scope. We throw a redeclaration error when you have conflicting `var` versus lexical binding names. This is generally true in all scopes except the global scope, which is special because, one, it is an open scope, meaning there is nothing syntactic you can do to close it. In the HTML embedding, you can open script tags and add more declarations. The other special thing about global scope `var` bindings is that they are literally properties: you can get property descriptors for them and access them as properties.

SYG: So what do we do when we try to extend the general rule – disallowing lexical binding names clashing with `var` binding names in the same scope – to the global scope? Something like this is disallowed. This seems good. If you have a `var x` in one script and a `let x` in the other, those conflict; the name `x` conflicts. So we disallowed that – fine. We also disallowed this: if you have a non-configurable global property named `x`, we also have that set of names conflict with lexical bindings. Fine. We also make special pains to disallow this.
Which is that if you have sloppy direct eval at the top level – sloppy direct eval is allowed to introduce new `var` bindings into the enclosing caller's var scope, basically. So in the first script tag, the caller is the global scope, which means the eval introduces `x` as a global `var` binding. And because it's a global `var` binding and we want that to conflict with lexical bindings, we also disallowed this, currently. This seems like good motivation – a fine thing to disallow. Except: how do you implement this?

SYG: So, a quick detour: first remember the direct eval `var` semantics. These apply in all contexts, with some special upshots for the global context. When you have a direct sloppy eval that introduces a `var` binding, the binding it introduces is delete-able. So it is configurable in the property descriptor sense. But in general, it is delete-able: even if you introduce a `var` at a function scope, that `var` is also delete-able. When you are at the global scope, this adds a property to `globalThis`, like every other global `var`. The upshot here is that it is added as a configurable property of `globalThis`. But wait – you can manually add a configurable property to `globalThis`, and we don't disallow that case. If you manually add the property, you are allowed to shadow the configurable global property with a lexical binding. Which means that in order to disallow the first snippet, but to allow the second snippet, we need to introduce a new kind of thing to specially track the global properties that were introduced via `var`.

SYG: And that is what VarNames is. It is a list of names on the global environment whose purpose is to distinguish a direct-eval-introduced `var` from ordinary configurable properties. Ordinary vars do not need to be tracked via VarNames, because those are non-configurable properties. So my claim is that – knowing the implementation complexity – what are the use cases here? Are there use cases?
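The delete-ability SYG describes is observable in current engines. As a small sketch – using indirect eval so the code runs against the global scope even from inside a module, with `fromEval` as a made-up name:

```javascript
// An eval-introduced global var is a *configurable* (delete-able) property of
// globalThis, which is why the spec needs VarNames to tell it apart from a
// manually added configurable property.
(0, eval)('var fromEval = 1;'); // indirect eval evaluates in the global scope, sloppy mode

const desc = Object.getOwnPropertyDescriptor(globalThis, 'fromEval');
console.log(desc.configurable);          // true – unlike a syntactic top-level `var`
console.log(delete globalThis.fromEval); // true – it can be deleted
```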
This was a question I put out to the committee last time. You shouldn't use sloppy direct eval to introduce global vars. Please don't; that's terrible. And you can already redeclare them – you just have to delete them first. There are three cases where we check for a name conflict between lexical bindings and VarNames and then throw a SyntaxError. Number 1 is conflicts between a lexical binding and another lexical binding: declaring a `let` or `const` with a like-named `let` or `const`. Number 2 is conflicts between a lexical binding and a `var` – a `var` introduced syntactically, not via direct eval – because those are non-configurable properties. This has the consequence that it also catches things that are non-configurable, like a lot of top-level function declarations, maybe things we put on the global scope that are non-configurable. Number 3 is the special rule for direct sloppy eval. And my proposal is to remove number 3. The upshot of the removal is that this is now allowed: `let x` will shadow the eval-introduced `var x`, exactly as if you had typed `globalThis.x = whatever`.

SYG: And the update since last time is that this was moved to a proposal, so it could move through the stages normally, instead of as a needs-consensus PR. That is done. And folks would have liked some time to consider the ramifications here. I believe this is a web-compatible change, because it's moving from a redeclaration error to a non-error. But you also shouldn't be doing this.

SYG: With that, I will take the queue questions, if any, before asking for Stage 2.7.

MM: So, first of all, I want to say thank you for having moved it to a proposal. I read it carefully. This was a great presentation; you presented all the issues very clearly. I am quite in favor of this going forward. But I would like to hear WH's opinion on this, as he has done a lot of thinking about global scopes versus the global lexical environment, the conflict between `var` and `let`, and all that. Waldemar?
WH: I haven't had time to look at this in detail. I don't have an opinion.

MM: Okay. I am in favor. Since you haven't had time, are you willing to let this go forward to 2.7 based on the presentation?

WH: Yes.

DLM: (on queue) supports 2.7.

CDA: Nothing else in the queue right now.

SYG: Okay. If the queue is drained, then I would like to officially ask for Stage 2.7 consensus.

CDA: All right. Support from DLM.

KG: Support.

MM: Support.

SYG: I am going to take that as consensus. To be perfectly clear, this will require engine changes in all engines, I believe. It's 2.7 because there are some tests that test the current behavior, which is the opposite of my proposed behavior. I plan to update those tests before the end of this meeting and come back for Stage 3, because that's basically the remaining requirement. Now that I have 2.7, I will come back for Stage 3 at the end with a 2-minute item, an FYI, if there's time for that. Thanks.

### Conclusion

- Stage 2.7

## ESM Source Phase status update and layering change

Presenter: Guy Bedford (GB)

- [proposal](https://github.com/tc39/proposal-esm-phase-imports)
- [slides](https://docs.google.com/presentation/d/1iM5cRgdRXLWLq_GxgRvzYmUTXEK6gzH_8QNgLKMmv7o/)

GB: I wanted to give an update on the proposal that got to Stage 1 last time, which was ECMAScript module phase imports. While we have phase imports through the source phase imports proposal, which represents the actual source phase, this proposal represents it as a phase for JavaScript modules themselves – the concept of a module within the module system. We had implementation feedback on source phase imports from SYG on Friday, which I will get to at the end of the presentation, since it ties into a lot of these concepts. We can go to the next slide.
GB: So, to go back to the use case that we are seeking to solve – portability for JavaScript modules – the problem we have identified is worker instantiation in JavaScript environments, where you pass an arbitrary string. It's a path, but it's relative to the base, not the current module. It's not really a portable pattern; it's difficult to have things work everywhere, and there are these frictions. So we identified this as a problem that would be useful to solve with a phase import for ECMAScript modules. And this is exactly what we solved for WebAssembly modules through source imports: through the same module system, you can gain access to not just an instance but the compiled module, to create multiple instances, and to create instances with different import values in WebAssembly. It's statically analyzable: you can see where you are using WebAssembly modules, and it integrates well with CSP policies – it's not just one blanket policy; we can actually associate the policies with individual modules. And so the idea is that we get a lot of those same benefits by defining the phase for JavaScript modules. Because if you can import a module and treat it as a capability for that module, in a lot of ways it represents the key. All the same benefits: tooling support for workers, where tools can see when a worker is being used; it's easier to see what modules are referenced; and when a module is referenced, you can do a relocation and reference the bundled version instead.

GB: The security argument doesn't hold as-is, though. This is one of the things that came out of the discussion: workers are governed by one policy (`worker-src`), while the other is `script-src`. And so there is something that would need to happen for this worker integration, which is a refinement of the CSP policy.
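As a sketch of the policy split GB is describing – the directive names `script-src` and `worker-src` are standard CSP, but the values here are made up for illustration – a page's worker policy can be stricter than its script policy:

```
Content-Security-Policy: script-src 'self' https://cdn.example.com; worker-src 'self'
```

Under such a policy, a module loaded from the CDN passes `script-src` at import time, but constructing a worker from that module would have to be re-checked against the stricter `worker-src`.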
And the idea here is that in the integration, because `worker-src` might be more refined than `script-src`, you need to re-examine the URL that the module originally came from, verify that it still passes the policy, and throw the error at the time you try to create the worker. The way this works: all implementations today have the URL in the host-defined metadata on the module record. We basically just define that we pick up the URL again out of the host-defined field – this is HTML integration – and re-verify. I wanted to flag it because it's a really interesting and important property that needs to be maintained, and one we have identified through further discussion.

GB: So one of the big questions we had: which phase are we specifying? There was discussion as to whether it should be the instance or the source phase, and there are tradeoffs. An instance represents a graph of modules linked together, whereas a source represents a single compiled ModuleSource before it is linked. So it's the compiled module. When you think about things like transfer – moving things between agents – transfer of instances is a complex thing to think about, because what does it mean to share graphs? To share state? To share instance state and errors and things like that? A source is much more amenable to transfer; it sidesteps the transfer problems. There are gating points, like the worker case, where you check the CSP policy – one does have to be aware of those, but it's simpler than instances. At the same time, we have already specified a source phase for `WebAssembly.Module`, so we can build on that and create instances of it without having to start from scratch. In addition, the design of instances is mostly constrained by the loader use case: you have to think about linking, host hooks, membranes; whereas the source is a building block that is supported in loaders but doesn't touch on the loader problems.
GB: For that reason, we have made the decision to go forward with the source phase for now, which is what we are designing for: specifying a concrete ModuleSource that extends AbstractModuleSource. The source is already associated with the registry key per the phasing model. So when you have a ModuleSource through the Wasm integration, the host-defined information on it already has the URL, already has the information about the registry key; that's how implementations use it. And so we effectively build on that to support dynamic import of sources as well. This is what makes it useful beyond the worker use case: if we make it dynamically importable, we layer with the use cases for module expressions and module declarations. So that's the direction we have decided to go for now, and we are going to follow up with some further design work. In addition, we want to specify a reflection API on the built-in objects – `imports` and `exports` – to get the imports and exports of a module. These could go on AbstractModuleSource, so that they apply to any ModuleSource, since these are ideally cyclic module records. But that's an open question, to some extent.

GB: So with dynamic import, I can import a source. The source represents that capability to import the module; in a lot of ways, the source phase extends from the key in the loader, so you import the key. There's no state associated with it: we go from the module to its key, and from its key to the import. What that means is that you get the same instance every time you pass the same ModuleSource into dynamic import, but a different instance depending on what context you're in. I will get to that later with the loaders integration. It acts like a capability for the module, and it's checked: the CSP check has already happened. If you have the ModuleSource object, you have the capability to use it.
Unless you're passing it somewhere that further refines the CSP policy.

GB: So the same thing effectively works with WebAssembly modules: import a source and treat that as a capability for its import, as if you had imported it directly without the source phase. Whether these would be dynamically importable is very much up to the host integration. In some ways, if you're running with a weak CSP policy that permits unsafe Wasm, from a security perspective it might make sense to support it – it requires defining the key at the time of construction. So that's a question for the other specifications and integrations to determine; it's still an open question for now. But by default, if you create one, it wouldn't be able to define the key.

GB: `imports` and `exports` have an initial design, based on providing some standard kind of analysis information: what imports there are, their attributes and what phase they're in, and the names of the exports, star exports, and re-exports. This is just an initial design, but we're starting work.

GB: The feedback we have gotten so far: YSV posted an issue. One point is that `new Worker` would be able to apply directly to the source, and because we know it is a module worker, we could drop the need to provide the `{type: "module"}` options object which you would normally provide as a second argument to the Worker constructor. The counter-argument is that there is a readability benefit in keeping the `{type: "module"}` object, because you can tell by looking at the code whether you are creating a module worker or a script worker.

GB: So I think one of the main things we identified here is that when you construct a worker with a ModuleSource, you know it has to be a module worker. So even if there is no `{type: "module"}`, we should have an early error rather than try to load it as a script – it will definitely be an early error. Whether we keep the `{type: "module"}` requirement or not is an open question.
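Putting the pieces discussed above together, the shape under discussion looks roughly like this. This is proposed syntax from the presentation – `./lib.js` is a made-up specifier, and none of this runs in current engines:

```js
// Proposed source phase import for a JavaScript module (not yet implemented):
import source libSource from './lib.js';

// Dynamic import of the source: same instance for the same source in the
// same context, per the registry-key model described above.
const ns = await import(libSource);

// The worker integration under discussion; whether { type: 'module' } stays
// required is the open question mentioned above.
const worker = new Worker(libSource, { type: 'module' });
```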
And I think that will just have to be part of the integration discussion, but it could go either way still. So it might retain the `{type: "module"}` aspect.

GB: On the instance layering: we presented to the SES group – sorry, the new TG – to determine that it layers well with instances. There are some compartment-related questions. When you dynamically import a ModuleSource, it's associated with the registry key, which is effectively the URL and its attributes – whatever the URL is in that environment. And dynamic import has a different meaning in different compartments, because the same source could have a different meaning per context. The simple example to think of is passing a source object across a compartment and asking what you expect to get: we should definitely expect to get the representation of the same module in that compartment's loader. There are questions to address in these terms – these are things that have been bounced around and avoided for a long time, but they are starting to be top of mind as we do this work.

GB: So, just to update on the layering: by going with the source phase imports design, we will be seeing something like this. Previously we had two branches, for instance and source. I think the source phase will allow us to layer with module expressions and module declarations; those specifications should become very direct specifications on top of this base. And module instance would effectively move inside of loaders, so specifying module instance would be part of the loader specification process. So that is how the layering updates.

GB: So, the imports layering question – I should bring up some of the background on this briefly. And when we get to discussion, I would like to first discuss the layering question and then all of the other design questions around the source phase in general, just to use our time well.

GB: But to give the background on that and dive into that discussion:
We are allowing non-ECMA262 objects to be provided through this source phase. And when we brought that up at plenary, the argument from Jordan was that we are having the sort of objects that are host-defined being returned through the module system, and so we should have some way to bless them, or to know that they have certain properties at least. Jordan said there should be a strong branding check – a `toString` that cannot be forged – and this was deemed to satisfy that branding question. And the way it worked was: we say that these objects extend from AbstractModuleSource; you can't do that in userland; hosts can then set the slot, and then you know you have got something that can only be a ModuleSource object, even though it is a host-defined object and not an ECMA262 object. And then, by setting that initial groundwork, we could potentially make sure that we are supporting the properties we need from an ECMA262 perspective.

GB: SYG's argument, on the other hand, is that internal slots pose implementation difficulty, because host-defined objects usually don't have ECMA262 internal slots. And so instead, the idea is that we should use a host hook to define this interaction. What we have got is a new host hook that works basically the same way, in place of that internal slot. It's not normative – it's a straightforward layering change – but it affects the layering, and the WebAssembly integration layering in particular, which is going to the Wasm CG tomorrow to try to get a phase 3 vote. So this layering change is critical to that process as well.

GB: So the PR we have up for the layering adjustment adds the new host hook and updates the `toStringTag` getter to call the host hook; if the object is not one of the host objects that the host deems a ModuleSource object, it will return undefined. It's the same behavior, but with the new layering.
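The branding idea GB describes can be sketched in userland terms. This is only an analogy – the real mechanism is an internal slot or host hook, and `AbstractModuleSourceLike` is a made-up name: a prototype-chain check is forgeable, while a check against hidden state is not.

```javascript
const brands = new WeakSet(); // stands in for the internal slot / host hook

class AbstractModuleSourceLike {}

// "Host"-created object: branded at creation time.
function makeHostModuleSource() {
  const obj = Object.create(AbstractModuleSourceLike.prototype);
  brands.add(obj);
  return obj;
}

// Userland can forge the prototype, but not the brand.
const forged = Object.create(AbstractModuleSourceLike.prototype);
const real = makeHostModuleSource();

console.log(forged instanceof AbstractModuleSourceLike, brands.has(forged)); // true false
console.log(real instanceof AbstractModuleSourceLike, brands.has(real));     // true true
```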
So that PR is up on the original Stage 3 source phase imports proposal: PR 62. And I would like to go into the discussion on that with Shu, make sure we have got everything clarified, and then go into the wider discussion. So let's take a look at the queue.

SYG: So I want to give some more background on the motivation. The implementation difficulty is not an impossibility kind of thing. It's twofold, I think: somewhat a difficulty argument and somewhat a future-proofing, maintainability argument. And the argument basically is: we don't, today, have subclassing as a way to cross the host boundary. Today we don't say that, in order to embed JS into something – HTML, whatever – one of the ways the host can hook into things is to provide proper subclasses of JS things defined in 262.

SYG: By requiring slot checks, that basically means that, at the spec level at least, providing real subclasses becomes one of the ways to cross the host boundary, and that is not how I want the JS spec to be layered. There are unintended consequences I haven't fully thought through; I feel it could have unintended consequences if we assume things that the host provides can in fact be real subclasses. So the argument is that today, for this proposal, as you have pointed out correctly, it's an editorial change. I think that extends in general: for anything I want expressed as a slot, you could do a host hook, at least locally. What I am worried about is that if we don't express it as a host hook, there are non-local things that are easy to miss in review for future proposals and future editions that assume a proper subclass, giving rise to difficulties down the road that we would have to reason about.
If you directly express it as the host having to provide objects that behave a certain way – and you can check whether it behaves that way via the host hooks – you make it explicit at the host level, and that better reflects reality, rather than saying: here's a slot; the check has to be done at the spec level, but you can implement this as a host hook if you really want, because these are observably equivalent. Figuring out whether it's equivalent gets harder and harder as more and more behavior gets hung off of the slots. So I would like to not do that as a spec thing. But I think you are correct: it's strictly editorial. I would welcome feedback from other web implementers here on the subclassing question, because I would like it as a precedent that we keep the spec host boundary to be exclusively host hooks, with the constraints we put on them.

GB: Clarifying points, if that's okay. On the question of internal slots: to be clear, there are no longer any internal slots with this change. It would only be host hooks.

GB: And in terms of proper subclassing: for the host hook that gets the ModuleSource object to begin with, when you do the source phase import, the only requirement we state there is that the object should have the AbstractModuleSource prototype as its prototype. So it's a requirement on the object, but that's the only requirement on the object. I guess I am not understanding your concern about proper subclassing. Are you concerned that behaviors of the subclass might not carry through the prototype chain somehow? We gave the example of `imports` and `exports` being an open question: whether they exist as a reflection on the ModuleSource object for JS only, or whether they exist such that WebAssembly would have the same API. That's the benefit we get by specifying that prototype. But for now it's just a minimal prototype that has nothing on it.
I would be interested to hear what the proper subclassing question is that you have.

SYG: Let's take NRO's question; it sounds like perhaps the same question.

NRO: What do you mean by subclassing here? Because I understood the problems about the internal slot, but not about the prototype chain. There are other cases where the prototype chain crosses the language boundary in the spec.

SYG: I mean, specifically – I don't mean the prototype; I will leave it at that. I mean, I would say there are host-provided objects that must have some special internal slots that are not otherwise present on ordinary objects, basically. So in this case, the current spec text draft has this ModuleSource name slot, I think – but the host providing the different kinds of modules is an expected extension point for the host, which means that in a naive, literal implementation of the text, the host creates objects that have the particular slot. By subclassing, it must have particular internal slots that are not already present on ordinary objects. That can be implemented via host hooks, but it is editorially clearer, and I think it's easier to think about, if we explicitly spec them as host hooks instead of as internal slots. Does that make sense? I can give a more concrete example from an actual implementation, if that helps.

NRO: Yes. Thank you for clarifying.

JWK: Did SYG already answer my question?

SYG: Reading your question, I don't think I did.

JWK: If we are never going to cross the host boundary by adding new methods on `AbstractModuleSource.prototype`, I think we should remove `AbstractModuleSource.prototype` entirely.

SYG: So let me try to answer that. I think there is still value in having the AbstractModuleSource prototype, and the root cause is that there are at least two notions of subclassing in JavaScript. One is: do I have something on my prototype chain? That kind of subclassing.
This is more akin to duck typing: conforming to an interface. It's valuable to be able to do an instanceof check, and to take a certain prototype being on an object's prototype chain as an affordance meaning it behaves like the prototype suggests it should. That is a separate notion from the representational one, the layout of the instance: is it a subclass of another class? This representational notion is more about things like internal slots and private names, where you can, through return override, for example, make a representational subclass of an object by calling the super constructor to install its internal slots and private names onto the instance, and later change the prototype chain to something else. Unfortunately, mechanically, these are separate in JavaScript. I am solely concerned about the representational thing. For the behavioral thing, "is this conforming to an interface", we use prototype chains, and I think that is very valuable to have. If source phase imports are meant to be extended by hosts, then the host-defined objects should behave and look like other things vended through source phase imports. But I don't want to constrain that the representation must literally be a subclass of the thing we defined in 262. +

GB: That's a really good point, I think, to separate the concept of the prototype from the layout. Because at the moment, our primary contention here is around the layout. It's worth noting that the proposal is at Stage 3; normative changes require agreement at this point. So from an implementation feedback perspective, we're not defining anything about layout here. But what we are defining, the spirit of the specification here, is that from the object you get back, we should be able to get to the underlying registry key, however that's done.
So there is some kind of key, a hash or a URL and module attributes or whatever, and there should be some way to make an association between that and the compiled artifacts, in the case of JavaScript, however the JavaScript is made. I can't speak to the implementation design space further than that, beyond what the intention is, but maybe we can have a further discussion to clarify what the V8 embedding looks like there. Specifically, if we can focus on this PR from the specification point of view: are there any reasons you think we have to be concerned about landing this fix at this point in time? Do you think we should bring this back to committee? Do you feel there's more design work that needs to be done? +

SYG: I am convinced at this point that it is, strictly speaking, an editorial change, but I believe it's an important editorial change to set editorial precedent, so we don't accidentally make unintentional normative changes in the future. So strictly procedurally speaking, I don't think we need to ask for consensus here. But given the motivation, which is to prevent an accidental change in the future, it might be good to get affirmation from other browser vendors. I don't think anything strictly needs consensus. +

JWK: I want to make sure I understand SYG correctly. So you mean we should keep it for programmers to test, like instanceof, but we will never add a method to it, because that requires the module to have some internal slots. Is that correct? +

SYG: I mean something weaker. It's fine to add methods. +

JWK: But for an added method to be useful, you need to access something internally. Right? +

SYG: Right. And the question is, editorially, should the internal access be done by host hook or internal slot, and my argument is we should do it by a host hook instead... defined in 262 via a subclass. +

JWK: Okay. I understand. Thanks. +

CDA: We are past time. Guy? Any final thoughts? Anything you want to record for the notes? +

GB: Sure.
If we can just note that the editorial change is moving from an internal slot model to a host hook model ā€“ yeah, I think that's it. +

### Speaker's Summary of Key Points +

- The editorial change is moving from an internal slot model to a host hook model +

### Conclusion +

- Proposal remains at Stage 3 +

## Atomics.microwait() (without mini wait) for stage 2 +

Presenter: Shu-yu Guo (SYG) +

- [proposal](https://github.com/syg/proposal-atomics-microwait) +
- [slides](https://docs.google.com/presentation/d/1Sb4Qaa5F8ZM9X0kxv5e-Wh5CbT1ZoryeCtVShoVhu8Y/) +

SYG: So this is a continuation of something I presented last time, at reduced scope. Last time, I had presented something I was calling micro and mini waits in JS. Since then, I've done more thinking, and I think the most bang for the buck in the short term is to drop what I was calling the mini waits, which I will go over. This is the same exact slide deck as last time. The motivation is that we add this low-level Atomics stuff to help things like Emscripten write better locks. It's important because the glue boundary between JS and Wasm, where Emscripten resides, is the systems boundary. They implement things like libc and pthreads, so that Wasm can compile a C++ or C application that, under the hood, uses pthreads, and that works. And that means that you have to write locks. So how do you usually write a lock? The usual way to write a lock nowadays is to have a fast path and a slow path in the acquisition. You want it to be fast when uncontended. It's faster to not sleep your thread, and to occupy the core with a spin lock, if you believe that unlocking is imminent or that nobody else is holding the lock. If you design your application such that there's not a lot of contention for some resource, you want this path to be very fast. Otherwise, you need to sleep a little bit, which is a syscall, and wait to be woken up.
You are adding slowdowns to your application for no reason if you believe most of the time the lock is uncontended. So you want there to be a fast path when contention is low. The problem is, if you do a spin lock naively, this has undesirable and unintended consequences on the CPU. CPUs really like to be hinted ā€“ I'm calling x86 out here; ARM does better ā€“ that you are doing a spin lock, so that they don't have to relinquish the core, and so that it is easier for the CPU and the caches to load the value you're trying to acquire. The upshot is that if you don't hint the CPU, you get worse performance and scheduling. So what you usually see in locks and lock-free code is that if you do a spin lock, you call something that yields the CPU in the loop of the spin lock itself. There's an intrinsic called `_mm_pause` that's available in a lot of C compilers, and there's the yield instruction on ARM. That's the point of this instruction: it exists to hint the CPU. It has no observable effects except hinting the CPU; there is no observable behavior except timing. The intention is that this waits for a short amount of time, hundreds of CPU cycles. And there is an iteration number that gets passed, because it is a common best practice to do some kind of exponential backoff so that you don't spin for too long. And should you choose to implement that, it helps to hint the microwait method itself with what iteration number you are at, so you don't wait too long, and the constraint is that microwait of N waits at most as long as microwait of N + 1. And then there's this other thing, the slow path, which is that you want to be efficient when the lock is contended. When someone else is holding the lock for what you think is a good while, you don't want to spin the CPU. That will pin the core to do nothing useful, and it will increase power consumption and battery drain.
So you want to be efficient when you know you are not going to get the lock. The way to do this in native code is to put the thread to sleep. +

The problem with this in JS is that we can't put the main thread to sleep. We can put worker threads to sleep with `Atomics.wait`, but there's a policy decision that we don't ever block the main thread, for responsiveness. And I had previously proposed that we let you clamp the timeout, so you are allowed to block it for a bounded amount of time. I am dropping that from this proposal because I don't know a good way to do it. The previous proposal hand-waved away a lot of details. Specifically, there's a complicated policy space in HTML around how much time is desirable to allow blocking, if at all. There's a notion of something called an idle period that I was hoping to use, but it turns out it was probably never going to be available in any meaningful amount, so it's going to cause immediate timeouts anyway. So the short story is that I don't know a good way to do this on the main thread, and I am going to drop it for now. I don't think there's a good payoff in the short term to figuring that out. The microwait thing is a thing that improves Emscripten efficiency in the short term. So to reiterate, this is basically for Emscripten; it's a very narrow use case thing. On paper, it's for anybody who is writing locks and lock-free code. In practice, I recognize that's a small slice of JavaScript code. Emscripten is large, and someone who compiles their C++ is using pthreads via Emscripten's implementation in JS of some very low-level calls that enable pthreads, namely futex. +

In practice, Emscripten's reach is wide. The number of developers writing such code is very small; the number of affected developers is very large. +

SYG: So I am asking for Stage 2 for this proposal with the reduced scope of microwait.
No more waiting on the main thread, not even for some clamped amount of time. I will take questions from the queue. +

MM: Okay. It looks like I am first. I just have some questions and clarifications. SharedArrayBuffer is currently effectively optional, in that its appearance on the global object is optional; that was intended to make sure shared array buffers were optional. And we didn't make Atomics optional, but that was sort of under the understanding that Atomics is not useful if SharedArrayBuffer is not present. This does not take a shared array buffer as an argument, so you could engage in a microwait. However, the concern would be not whether you can cause a delay of a given amount, but whether it enables a program to measure duration. And I don't see a way this can be used to measure duration, but I want clarification on that. I also wanted your feedback on the idea of making shared array buffers and Atomics jointly normative optional. +

SYG: On the first part: I do not think it enables writing a high-res timer, any more than writing a user function and doing performance.now before and after to calculate a delta. I don't think it gives you any more power than that, which is something you can already do. +

MM: Let me clarify my question. Suppose you're in an environment in which all other means of measuring duration, such as `Date.now` and anything else, including indirect measurements through nondeterminism, have been denied. In other words, a shared array buffer does give you an indirect ability to measure duration, but there's no other way to measure duration, and Atomics is still there without SharedArrayBuffer. With this microwait, which does not take a shared array buffer argument, it seems like it could cause a delay of an amount that you specify, which is fine, but it does not seem like it enables the measurement of duration by itself. Is that correct?
+

SYG: That is my understanding. Yes. +

MM: Okay. Good. In that case, I have no objection, but I would like your feedback ā€“ it's tangentially related, but since it raises the issue of Atomics not being completely without functionality in the absence of SharedArrayBuffer ā€“ what are your feelings on making Atomics and SharedArrayBuffer jointly normative optional? +

SYG: So my feedback is, I don't think we can, and I will clarify. I say that because while the SharedArrayBuffer constructor can be removed, or is removed in certain contexts, on the web, at least, the thing that we block is the ability to communicate the shared array buffer to different threads, not the ability to create a shared array buffer. So concretely, even if the SharedArrayBuffer constructor is not present, you are still able to create a shared array buffer via a shared Wasm memory. In an environment where communicating that shared memory is denied, if you postMessage that memory you get an error, but you don't get one when you try to create the shared array buffer. And `Atomics` also already works on non-shared array buffers. The reasoning for that was that applications that want to ship one copy of their binary, compiled for shared array buffers, can progressively degrade to regular array buffers in contexts where shared memory is turned off, and they don't have to recompile and ship a completely different binary that doesn't use atomics. They get normal array buffers, except for the waits, which are disallowed because that would immediately deadlock. +

MM: That was very clarifying. I didn't realize that. Let me just clarify: Atomics on a non-shared array buffer is completely harmless with regard to all my concerns; it sounds like you confirmed that. And I am happy to clarify my concerns, and we can take this off-line if this is too much on this topic.
The other thing is, with regard to the other way of obtaining a shared array buffer: I am concerned about enabling conforming JavaScript implementations that don't have access to Wasm and don't have access to concurrency. I would like a sequential JavaScript implementation to be one that conforms to the spec, and that has an effect on what we designate as normative optional. Absence from the global does not suffice, for another reason: we have the getIntrinsic proposal in progress, and according to a precise reading of the current spec, SharedArrayBuffer could be argued to be a hidden intrinsic, which would be revealed by getIntrinsic, and that is contrary to the intention when we made the global itself optional. +

SYG: We should take this off-line. Let's take this off-line. I want to drain the queue through the rest, but I want to confirm your first question: my understanding is yes. +

MM: Good. Thank you. +

CDA: We are at time. +

SYG: So I wanted to ask for consensus. Can I ask folks in the queue whether their topics are material to consensus, and if so, can I ask for a 5-minute extension? +

MM: I am happy with a 5-minute extension and I do not object to consensus. +

SYG: Waldemar? +

WH: I wasn't able to find any spec text for this. Is there one? +

SYG: You're absolutely right. I completely dropped the ball in writing spec text before putting it on the agenda. I am not really eligible for Stage 2 here; I just forgot. But to answer your question on the semantics: from the spec point of view, it will do nothing. It says return undefined, and there's an implementer's note that we expect you to yield the CPU. +

WH: Yes. I like this proposal. I think just the documentation is a bit out of date. The documentation includes the clamping behavior, and as you said, that's gone. +

SYG: Completely right. Yeah.
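For context on the fast-path discussion earlier in this item, here is a rough sketch of the spin-lock pattern SYG described. `Atomics.microwait` has no spec text yet (as noted above), so the sketch treats the hint as optional and falls back to a no-op; the names `tryAcquire`, `SPIN_LIMIT`, and the fallback are illustrative, not part of the proposal.

```javascript
// Illustrative spin-lock fast path. The microwait hint is advisory only
// (no observable behavior except timing), so where the proposed
// Atomics.microwait is unavailable we fall back to a no-op.
const microwait =
  typeof Atomics.pause === "function" // the hint later shipped as Atomics.pause
    ? (n) => Atomics.pause(n)
    : () => {}; // hypothetical fallback: hinting is purely optional

const UNLOCKED = 0;
const LOCKED = 1;
const SPIN_LIMIT = 16; // illustrative bound before taking the slow path

function tryAcquire(i32, idx) {
  // Fast path: spin briefly, hinting the CPU with the iteration number so
  // backoff can grow (microwait(n) waits at most as long as microwait(n + 1)).
  for (let iter = 0; iter < SPIN_LIMIT; iter++) {
    if (Atomics.compareExchange(i32, idx, UNLOCKED, LOCKED) === UNLOCKED) {
      return true; // acquired without sleeping
    }
    microwait(iter);
  }
  // Slow path (not shown): a worker thread would Atomics.wait here; the main
  // thread cannot block, which is why the clamped wait was dropped.
  return false;
}
```

On an uncontended lock, `tryAcquire` succeeds on the first compare-exchange; under contention it spins at most `SPIN_LIMIT` times before the caller falls back to sleeping (on a worker) or yielding.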
+ +WH: I will support this once you update ā€” add spec text and correct the documentation. + +SYG: Okay. Given that, I withdraw for Stage 2 consensus. I didnā€™t prepare the spec text. Philip, that sounds like a naming question. Please open an issue and we can deal with that. + +### Conclusion + +- Atomics.microwait withdrawn for Stage 2 consensus, for now. diff --git a/meetings/2024-04/april-09.md b/meetings/2024-04/april-09.md new file mode 100644 index 00000000..76e4210a --- /dev/null +++ b/meetings/2024-04/april-09.md @@ -0,0 +1,1139 @@ +# 9th April 2024 101st TC39 Meeting + +----- + +Delegates: re-use your existing abbreviations! If youā€™re a new delegate and donā€™t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. + +You can find Abbreviations in delegates.txt + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|-----------------| +| Istvan Sebestyen | IS | Ecma | +| Keith Miller | KM | Apple | +| Ashley Claymore | ACE | Bloomberg | +| Waldemar Horwat | WH | Invited Expert | +| Jesse Alama | JMN | Igalia | +| Linus Groh | LGH | Bloomberg | +| Ron Buckton | RBN | Microsoft | +| John-David Dalton | JDD | OpenJS | +| Ujjwal Sharma | USA | Igalia | +| Ben Allen | BAN | Igalia | +| Daniel Minor | DLM | Mozilla | +| Samina Husain | SHN | Ecma | +| NicolĆ² Ribaudo | NRO | Igalia | +| Bradford Smith | BSH | Google | +| Chris de Almeida | CDA | IBM | +| Jordan Harband | JHD | HeroDevs | +| Philip Chimento | PFC | Igalia | +| Daniel Rosenwasser | DRR | Microsoft | +| Mathieu Hofman | MAH | Agoric | +| Mark Miller | MM | Agoric | +| Eemeli Aro | EAO | Mozilla | +| Duncan MacGregor | DMM | ServiceNow | +| Jack Works | JWK | Sujitech | +| Mikhail Barash | MBH | Univ. 
of Bergen | +

## Explicit Resource Management Normative Updates and Needs Consensus PRs +

Presenter: Ron Buckton (RBN) +

- [proposal](https://github.com/tc39/proposal-explicit-resource-management) +
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqpkI6V9_w6ykvsG1w?e=ehAC64) +

RBN: Good morning, everyone. I am Ron Buckton from Microsoft. This hopefully will be fairly brief; the plan is to discuss some of the normative updates to explicit resource management and the needs-consensus PRs. The first one ā€“ we have been discussing this in the pull request, but as an update ā€“ there was a potential leak: the DisposableResources AO had no indication that after a block containing `using` exits and all of its disposable resources have been disposed, that stack will never be accessible again, and therefore those resources can be freed. So this [PR](https://github.com/tc39/proposal-explicit-resource-management/pull/194) adds a note, as well as explicitly setting the disposable resource stack to undefined, though this may not be required based on discussions. But the basic premise is, once a stack has been disposed, it can never ā€“ or should never ā€“ be used again. Therefore, it's free to collect the resources that are part of it. This is considered a suggestion; this is not necessarily a needs-consensus PR, since implementers are free to collect any object that isn't ever used again. But we felt that the note was helpful to indicate to implementers that this is the intended or preferred behavior, rather than maintaining those resources. +

RBN: The second [issue](https://github.com/tc39/proposal-explicit-resource-management/pull/216#issuecomment-2015449095) under discussion ā€“ and this one requires consensus ā€“ is that currently the dispose AO performs ReturnIfAbrupt when calling a captured dispose method.
This is the same type of thing as in iterator `return`, et cetera: with AsyncIterator `return`, a synchronous exception that is thrown ā€“ such as if you are trying to construct the AsyncIterator return method on demand ā€“ would throw synchronously rather than asynchronously. And while that is the expected behavior in those cases, right now the asynchronous dispose that is picked up by an async dispose declaration isn't wrapped in a PromiseCapability in the same way as in the async-from-sync-iterator implementation. As a result, even though we do an await for async dispose, that essentially awaits undefined; a dispose will always throw synchronously, and therefore won't be caught asynchronously the way an async function would. Therefore, this PR proposes that the exceptions thrown synchronously should instead trigger a promise rejection; we don't want to be different from how async-from-sync iterators work, for consistency. So PR 218 now wraps this: GetDisposeMethod, which creates this wrapper function, now would create a PromiseCapability. Since this one does require consensus, I would like to ask for consensus, and whether there are any objections to this PR. I can open it, if that would be helpful as well. +

USA: Yeah. There is nothing on the queue yet. Let's give it a few seconds. Nothing on the queue yet. +

RBN: Is there anyone that can provide explicit support for this change? +

USA: MAH is on the queue. +

MAH: This is great. From my understanding, a throw in the `await using` would result in an await point, which is what we wanted. +

RBN: Yeah. That is correct.
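To illustrate the consistency RBN describes, here is a rough model (not the actual spec text) of wrapping a captured dispose call so that a synchronous throw surfaces as a promise rejection, the way async-from-sync iterators handle it; `callDispose` is an illustrative name, not a spec abstract operation.

```javascript
// Rough model of the fix: invoke the captured dispose inside a promise
// executor so a synchronous throw becomes an asynchronous rejection.
function callDispose(resource, disposeMethod) {
  // Before the change: disposeMethod.call(resource) could throw
  // synchronously, escaping any surrounding `await`.
  // After: the throw is captured by the promise and rejects it instead.
  return new Promise((resolve) => resolve(disposeMethod.call(resource)));
}
```

A dispose body that throws now yields a rejected promise that the `await` can observe, while (matching async-from-sync iterators) a throwing getter used to look up the method would still throw synchronously.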
+

MAH: Support, yes. +

RBN: I do want to also clarify: it is still the case, both with async-from-sync AsyncIterators in general and with `await using`, that if the resource's async method happens to throw synchronously ā€“ either it's a user-implemented dispose that throws before returning a promise, or it's a getter that returns a function, and those happen to throw ā€“ those still will throw synchronously. That is the case for `for await...of` and every other AsyncIterator as the spec is written. We are consistent with that, although that's not likely to often be the case, since most users writing an AsyncDisposable will write an async function. But this will maintain consistency with how that works. +

RBN: So the [next item](https://github.com/tc39/proposal-explicit-resource-management/pull/219) to discuss, which also requires consensus: we have been discussing the deterministic collapse of awaits when the resource is `null` or `undefined`. Now, to simplify the process of conditionally accessing or conditionally registering a resource, we allow `null` and `undefined` as valid values in a `using` or `await using` declaration. A null or undefined value means we don't need to actually run any type of dispose callback. It allows people to return `null` or `undefined` when a resource is not available but they don't want to throw, instead of having to completely bifurcate sections of code to handle those cases. +

Now, the way that we handle the async interleaving point, at the request of MM, was that any time you see an `await using` declaration there is an implicit await that happens, barring the very small case of a synchronous throw from an async dispose. And how that is currently implemented is that whenever a null or undefined resource is added to a disposable resource stack, we will run an await for the null or undefined. That acts as an indicator that an `await using` exists and an await needs to happen.
The downside of this, though, is that if you have multiple nulls or multiple undefineds, or a mix thereof, combined with other resources ā€“ or all null ā€“ you await for each individual null, even though the requirement was only that code outside the block runs in a separate turn. Therefore this would resume three turns later, when we ideally want to finish after one turn and perform a single await. So the idea with the deterministic collapse is that we would reduce all of the nulls down to a single await. PR 219 currently only reduces contiguous runs of null or undefined to a single await. I think NRO has a topic specifically about this as well. The way the PR is currently written, if you have `x = null`, `y = null`, those are condensed down to a single await; then, if you have other nulls after a non-null resource, those get condensed down to a separate single await. It's been discussed in the PR that all nulls should condense down to a single await instead ā€“ that nulls become essentially transparent as long as something else triggered an await ā€“ which is the other direction we can go. +

So I will take some additional questions here, since there's an option to consider as well. +

NRO: Question. I guess my preference would be to collapse nulls as much as possible, so if there is at least one null ā€“ but I would also be fine with what the pull request is currently proposing. +

RBN: I think that is also the most recent suggestion on this PR; it might be in the issue, I think. There was a suggestion that all of the nulls become transparently ignored. I will take some additional ā€“ I don't know what NRO means. Sorry. +

NRO: MF is saying he agrees with me. +

RBN: All right. I see. Yeah.
So unless there's anyone that prefers the way the [PR](https://github.com/tc39/proposal-explicit-resource-management/pull/219) currently handles this ā€“ only contiguous runs ā€“ I will make the change to collapse all null and undefined down to a single await, but only if there is no other await that occurs. And as a result, I would like to seek consensus for that change. So I am looking for either explicit support or anyone that is opposed. +

USA: I see nothing on the queue so far. +

KG: I just want to clarify. That sounds reasonable to me, but I want to make sure I understood the proposal correctly. So this is saying that if there is any await that happens for reasons other than null, then the nulls don't cause any awaits at all? +

RBN: That is the suggested change to this PR. Yes. +

KG: Okay. And in the case there wasn't any actual await, but there was at least one null ā€“ so it's not a completely empty set of declarations ā€“ then you perform exactly one await? +

RBN: That would be correct. +

KG: That sounds good to me. +

RBN: There is one possible thing to consider there. Let's say you had `x =` non-null, followed by a run of nulls that don't do anything. Then the last thing that you would dispose would be the first item, and ideally you will await that result. Again, if it happened to be the small corner case of an async dispose that throws synchronously, the question is: should we still enforce an await, since had these been actual resources we might have expected one await to occur before that exception occurred, or does that end up being completely synchronous, so the throw into the following body is synchronous? Because this is such a narrow corner case, I am not sure it's something we should be too concerned about.
But if we want to maintain consistency, we might still force an await, because something might have happened with the other resources ā€“ or just go with the simpler approach, which is to ignore nulls as long as some async dispose or dispose wrapper was invoked that would have triggered an await. +

MM: Sorry. Could you state the corner case again? I'm sorry. +

RBN: Yes. Maybe I can edit the slide briefly to indicate this. The corner case would have been this example here, where the first things that would have been awaited, had they been resources, would be *z*'s async dispose and *y*'s async dispose. We haven't done an await yet. Then there's the potential that *x*'s async dispose throws synchronously, either because async dispose is a getter, or because the async dispose has synchronous code that executes outside a promise before returning a promise. This can trigger a synchronous throw that bypasses all of this, and that's purely because it matches the semantics of how AsyncIterator works. There's no await that will happen. So the question is: would we enforce an await at that point, even though, if there was only one resource ā€“ only `await using x =` non-null and no other nulls ā€“ a synchronous throw would not result in an await? Do we maintain that behavior, consistent with `for await...of`, or do we still enforce an await because there was something like the *y* and *z* that comes afterwards? +

MM: Okay. My preference is clearly that we would enforce the await, because the reason why the nulls, in general, still cause at least one await is so that you don't have to reason about what conditional and computed data values are in order to know whether there's an await breaking up the control flow.
Since the *z* would happen before the non-null, and you don't necessarily know statically that it's null, my preference would still be to enforce the await. Now, to argue the other side: the reason why the synchronous throw from the dispose, although I do find it unpleasant, is something that we are willing to live with, is because the throw forces the program into a completely different control flow path. So it's not that you're proceeding forward with or without an `await` depending on data; it's that you're only on the throw control flow path in the case where there's no await. So I think I can live with it either way, but I strongly prefer that the `z = null` happening before the `x = nonNull` does force an await. +

RBN: All right. I think that's perfectly reasonable. I do think the other approach is simpler, but that's perfectly reasonable, and I think it's perfectly feasible to do with a minor change to the PR. So in that case, what I am seeking consensus on is the deterministic collapse of all null and undefined in `await using`, which will trigger at least one await even if there is a synchronous throw from an async dispose for any resource in that block. If that seems reasonable, then that's what I will seek consensus for. +

MM: Okay. Good, thank you. +

USA: So that was it for the queue. +

RBN: So I am looking for explicit support. I am not sure we got that. +

USA: Earlier, there was one statement of explicit support by NRO off the queue. +

RBN: Any objections? +

MM: We support. +

RBN: All right. Thank you. The last ā€“ +

SYG: Sorry. Could you reiterate exactly what we got consensus on, given the change from the PR's contiguous runs? +

RBN: Yes.
Rather than contiguous runs, the change ā€“ and I will summarize in the notes as well ā€“ that we are seeking consensus on is that all null and undefined resources get collapsed down to a single await, which only happens if no other await happened in the `await using` declaration. So even if the non-null resource throws synchronously when you invoke the async dispose ā€“ which would not trigger an await ā€“ we still introduce an await as a result. Does that clarify? +

SYG: Yeah. Thanks. +

USA: WH? +

WH: Let's consider the first example, where *x* and *y* are null and *z* is non-null. Would that now cause two awaits or one? +

RBN: The suggested proposal, if *x* and *y* are null and *z* is non-null, is that there is a single `await`. +

WH: Okay. And in the case of *x* non-null and *y* and *z* both null, there will be two awaits? +

RBN: As we are discussing, there would still be a single await. It would be like the *y* and the *z* didn't exist, except for the case where the non-null resource throws synchronously. +

WH: I am trying to understand; I am confused about what the solution is now. When running the code, you get to *x* and *y*. Those are null. You don't await. You get to *z*, which is non-null. What's the order of operations? When would you do the await in the second case? +

RBN: There are two places to check. One is when this calls the async dispose on the non-null resource: if it throws synchronously, then we haven't awaited; if it does not throw synchronously, an await has occurred. The current PR essentially does this, but it breaks things into what comes before and what comes after. +

So again, it would branch based on whether getting the method and invoking it throws synchronously; that does not trigger an await, and therefore we need an await to occur, because there were other declarations.
If it does not throw synchronously, and you get a promise result that you can then pass to a PromiseCapability and then await, you mark that an await did occur. Therefore, we would see: okay, we have all the other things marked as null, an await occurred, and we no longer need to introduce an await. + WH: Okay. So you catch the synchronous exception, then await because you had null, and then deal with the synchronous exception. Is that right? + RBN: Essentially, yes. + WH: Okay. + RBN: It does feel a bit strange to me, but essentially the idea is: we didn't need to introduce an await for *y* and *z* to be able to execute the non-null dispose, but we need an await in general to ensure that the code after the block – barring the very narrow – not even then, sorry. So the code that runs after the block runs in a later turn. + WH: Yeah. It seems a little weird to me too. But I don't have a better idea. + RBN: Yeah. It seems a little weird to me as well. I would be fine if we ignored the *y* and the *z*, in which case we would have triggered an await for the non-null resource anyway. The primary requirement is to meet MM's specific requirement of the explicit await. + USA: Next we have Nicolò. + NRO: I think I understand, and I would be comfortable with saying we have consensus, but it would be great if you could, after the change, re-ask for consensus, given what we discussed, to make sure people can read the actual spec text and check whether it matches their understanding. + RBN: We do have pretty universal consensus that, regardless of whether the null comes before or after, we should collapse the await; the big question was whether or not we force an await in the case when the async dispose throws synchronously. I can start with the first bit and come back at the end of plenary, since the change to make is relatively small, and ask again once anyone with concerns has had time to review the change. If that is acceptable, I can do that as well. + NRO: Yeah.
If there were a couple of minutes at the end of the plenary, that would be great. + RBN: All right. I will see if I can do that then. + USA: Next we have a clarifying question by John. + JDD: Hi. I am just making sure that collapsing these awaits is essentially just an optimization. My point of view is that the behavior around the edge case should be consistent with what would happen if the optimization didn't occur. So the case where we need to have an await, combined with the case where you're throwing synchronously – if that matches the assumed behavior, that's great to me. I don't think it is awkward. If that's the design, then that sounds great to me. I am in favor of any optimization that still preserves the expected behavior. That's it. + RBN: Yeah. There are two points about the optimization. The first is that the PR as it stands right now, which does deterministic collapse of a contiguous run of null and undefined, is the optimization. When the optimization falls down, it goes back to what you normally expected: it would introduce an await as if one had occurred. But that introduces excessive awaits when you have null, non-null, null as the set of resources. + RBN: The second bit about optimizations: this isn't exactly an optimization, which in general is the thing I would say an engine can only do if it's not observable at runtime. This is an observable semantic change, because it's possible to have things that run in different turns, and based on null collapse, which turn something runs in would be different than if we didn't have null collapse.
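The collapse rule under discussion can be modeled in plain JavaScript. Below is a hypothetical userland sketch – `disposeResources` and the plain `asyncDispose` method are illustrative stand-ins, not the spec's actual machinery: null/undefined resources never get their own await, but if no real await happened (including when the only non-null dispose throws synchronously), a single await is still forced before the error propagates.

```javascript
// Hypothetical sketch of the proposed deterministic-collapse semantics.
// Resources dispose in reverse declaration order; null/undefined entries
// are skipped, and one await is forced at the end if no real await
// occurred but null/undefined resources were declared.
async function disposeResources(resources) {
  let awaited = false;
  let hasError = false;
  let pendingError;
  for (const res of [...resources].reverse()) {
    if (res === null || res === undefined) continue; // collapsed: no per-resource await
    let p;
    try {
      p = res.asyncDispose(); // stand-in for invoking @@asyncDispose
    } catch (e) {
      // Synchronous throw: no await has happened for this resource.
      hasError = true;
      pendingError = e;
      continue;
    }
    try {
      await p;
    } catch (e) {
      hasError = true;
      pendingError = e;
    }
    awaited = true; // awaiting the promise counts, even if it rejected
  }
  // Force the single collapsed await if nothing was actually awaited.
  if (!awaited && resources.some((r) => r === null || r === undefined)) {
    await Promise.resolve();
  }
  if (hasError) throw pendingError;
}
```

Which turn the code after the block runs in is therefore deterministic, even when a dispose throws synchronously.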
The main reason why this shouldn't be that much of a concern is that it's a terrible idea to depend on which turn async operations resolve in – that's one of the classic release-Zalgo-type things. You shouldn't write code that depends on which turn something occurs in, unless you are writing test code for these behaviors. It's a bad idea to depend on that, so I am fine with the deterministic collapse. + RBN: That said, if you wanted to treat this as an optimization, you would have to trigger the await before the throw occurred for it to be considered one. + RBN: So I am not sure where I stand on considering that. I want to say this is similar to the thing with AsyncIterators, where we tried to remove some excess awaits that occurred due to the promise adoption behavior. We kind of privileged native promise results, so fewer awaits occur in certain cases as a result of being a native promise versus a Promises/A+ promise. We are trying to reduce unnecessary awaits so that, even if you're writing code that doesn't care which turn it runs in, you are not waiting multiple extra milliseconds for each await that essentially does nothing, which is just an artificial slowdown of the code. + So in a sense, as an optimization, that does make sense. + JDD: Correct. Thank you. Yes. I never even considered trying to account for the specific cycle. An await needed to happen; that was it. As long as an await was expected, the number of cycles really doesn't concern me, and I don't think people should be tracking that either. So yeah, if this is all aligned with that and similar to the other optimizations done for `for-of`, that sounds great. + USA: Okay. It looks like we're heading towards time and there's nothing on the queue. + RBN: I will move ahead.
I will come back later in plenary after I have made the change to the PR, for NRO and anyone else who wants to review. + RBN: So the other thing that potentially needs consensus here: there is a lookahead restriction we added for `using` that banned `export using`, because it doesn't make any sense. If you export a UsingDeclaration, you are exporting a resource that, by the time you access it, is not available, because it has been disposed of. Therefore we banned `export using`. That was also designed to cover the async version of the proposal back when we were still using the earlier await syntax. When I merged the switch to the `await using` ordering, I overlooked that it meant `await using` was no longer banned as an export, and I want to maintain that ban. So this PR just introduces a lookahead restriction: `export` cannot be followed by `await` or `using`. There is no other legal form of `export await` anything, so this doesn't get in the way of any other code right now. So this is a ban for both. Quickly, I will look for consensus or opposition, and then I can move on to the next one. + USA: So already there are three statements of support on the queue, from NRO and JDD and WH. Perhaps let's give it a second for others to jump in, if they would like. + MM: Support. + RBN: The [last one](https://github.com/tc39/proposal-explicit-resource-management/pull/220), which also probably does not need consensus since it has no actual effect on the spec: V8 is working on an implementation of UsingDeclarations and found that there is an early error introduced whenever a UsingDeclaration is used in a place that's not a block scope. Essentially, this is banning its use directly in a Script. The idea here is that they have to be immediately within a block scope; they can't be directly within a Script. It's hard to write that restriction with the way the grammar is organized.
The early error fires if a UsingDeclaration isn't nested within one of the specific constructs that introduce a block scope, and one of the constructs listed as introducing a scope is *ClassBody*. But you can't write a UsingDeclaration directly in a *ClassBody*; it has to be within something else that is a block scope. SYG requested I remove that restriction. It doesn't affect the spec text, so PR 220 removes it. That is editorial – it's an error that can never occur. I want to make sure people are aware of the change. Is there any other question on that? + USA: That was a clarifying question you just clarified. WH supports on the queue. + RBN: There was one more thing for UsingDeclaration, but that's day 3 – it's a potential proposal, and I will cover it at that time. + USA: Great. Thank you, Ron. And yeah, thanks, everyone else, for this discussion. Given that we are over time, let's quickly move on. But Ron, please make sure to record the conclusion. Waldemar, will you quickly go over your topic? + WH: Yeah. I'm listed as a reviewer on the slide and I reviewed this. + RBN: I appreciate that. Thank you. I will update the topic with the summary. + + ### Speaker's Summary of Key Points + + - PR#194 - DisposeCapability leak in the DisposeResources AO. To be solved by adding a NOTE. + - PR#218 - @@dispose in `await using` throws synchronously. To be solved by wrapping in a PromiseCapability. + - PR#219 - Deterministic collapse of await for null/undefined. + - PR#222 - Missing lookahead restriction to ban `export await using`. + - PR#220 - Superfluous *ClassBody* restriction in Early Errors. + + ### Conclusion + + - PR#194 - Editorial only. No consensus required. + - PR#218 - Has consensus. + - PR#219 - Consensus on collapsing **all** null/undefined; needs PR updates for consensus on sync-throw-from-@@asyncDispose behavior. To be revisited by Day 4 of plenary. + - PR#222 - Editorial only. No consensus required. + - PR#220 - Has consensus.
+ + ## AsyncContext Stage 2 updates + + Presenter: Justin Ridgewell (JRL) + + - [proposal](https://github.com/tc39/proposal-async-context/) + - [slides](https://docs.google.com/presentation/d/1ok6fX9PN3XEv9ZwffrDzJX24uuiNrkGDZN-KgGwGkc0/edit?usp=sharing) + + USA: All right. Then moving on, we have Justin with AsyncContext updates. JRL, are you ready? + JRL: AsyncContext update. It's been a while since we talked about anything, but there's been a lot of progress in the background between meetings. + JRL: First up, `AsyncContext.wrap` is back. It's now attached to the Snapshot class as a static method. It has exactly the same behavior as before, and it is exactly the same as if you were to create a new Snapshot and immediately call run on that snapshot instance, passing whatever function you want to wrap. We introduced it based on feedback that, if you are only passing a single callback to another library and want to wrap it for whatever reason, it's more convenient to use a static wrap than to create a snapshot plus an anonymous IIFE or something that will then invoke the snapshot's run with the correct context. Snapshot still exists because it is the more efficient API when invoking multiple callbacks in the same context, but wrap lets users who are passing a single callback somewhere else efficiently wrap it and ensure it runs within whatever scope they want. The code at the bottom is exactly the same as it was before. + JRL: We fixed our `FinalizationRegistry` support. The FinalizationRegistry will snapshot when constructed, not when you call the `.register()` method. This is based on the pattern that other observers implement – particularly MutationObserver, PerformanceObserver, all the observers that exist on the HTML side of the spec. They all batch their updates and invoke their callback a single time with all the updates that happened since the last batch.
That makes it impossible for those observers to invoke the callback within multiple different contexts captured when you call `.observe()` or `.listen()` or whatever their APIs for registering are. Instead, all of those observers snapshot when they are constructed, and they restore that snapshot whenever invoking the callback with all of the updates. We chose to follow that pattern: all the observers snapshot at construction and restore that snapshot while invoking. In this case, you will see that we created context one when we initialized the registry, so that is the snapshot; the callback will be invoked within that snapshot. You register at any point later on, and whenever the object is freed, the callback runs within that same context. + JRL: We have updated snapshotting of generators. If you remember from the last time we presented, we are snapshotting the init-time context of a generator, and every time you call `.next` on that generator, it restores the init-time context. When you first invoke it, it restores that context; when it hits a yield, it pauses; and whenever you resume the generator by calling next, it again restores the init-time context. The only change since the last meeting is that we changed the spec text so that spec-internal generators – i.e., the iterators the spec defines internally – do not snapshot anymore. The only case where this was actually observable is `Array.prototype.values`, and only if you have a getter on your array object. We don't want to force implementations to redo how they implement the spec-internal iterators, because they are heavily optimized, particularly for arrays. So the spec-internal iterators don't have to do any snapshotting; only user generators – the actual source text that the user is writing – perform the snapshot-at-init-time behavior. + We fixed promise unhandledrejection.
We told you that unhandledrejection fires in the context of the rejection. So in this case, we created a context one, and then we created an unhandled promise inside of it by throwing a foo error. + JRL: At the time that function was invoked, and before we hit the throw, we executed within the one context. Once that rejection happens, we expect the unhandledrejection event to capture that one context, so that you can determine what the context and your variables' values were when dealing with unhandled rejections in the event listener. + There was a bug in our spec implementation that restored the old snapshot too early. Meaning, this would have actually executed within the undefined context – the context that was active immediately prior to this – making it not useful for actually determining what went wrong in your application. + JRL: We have fixed this now. It behaves the way we have always told you it would behave, and this is properly implemented in our spec text. + We have designed several implementation strategies. Currently, the spec text uses a map. It is extremely easy to understand, there's no way to get it wrong, and it works well. However, it encourages high memory usage: whenever you have a bunch of nested run calls – `v1.run`, `v2.run`, and so on with these AsyncVariables – you have maps with N + 1 and N + 2 entries, and it quickly becomes overwhelming in these pathological cases. This is the downside of the easy strategy. + JRL: The other way to approach this is with a linked list, which is extremely fast for run: you don't have this case where you're allocating N + 1 and N + 2 and N + 3 entries. + It's better for memory usage, but worse for lookups. There is a problem with the linked list approach.
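As a purely illustrative model of the map strategy described above, here is a minimal synchronous userland sketch of the `AsyncVariable`/`Snapshot` API – hypothetical code, not the proposal's spec text, and it ignores propagation across actual async boundaries. Each nested `run` copies the current map, which is exactly the N + 1, N + 2 allocation growth being discussed; it also shows the `Snapshot.wrap` convenience from earlier in the update.

```javascript
// Minimal synchronous model of the AsyncContext API (illustrative only).
// The "current mapping" is an immutable Map; each nested run() copies it,
// which is the memory-growth downside of the map strategy.
let currentMapping = new Map();

class AsyncVariable {
  run(value, fn, ...args) {
    const prev = currentMapping;
    currentMapping = new Map(prev).set(this, value); // N + 1 entries
    try {
      return fn(...args);
    } finally {
      currentMapping = prev; // restore on exit
    }
  }
  get() {
    return currentMapping.get(this);
  }
}

class Snapshot {
  #mapping = currentMapping; // captured at construction

  run(fn, ...args) {
    const prev = currentMapping;
    currentMapping = this.#mapping; // restore the captured context
    try {
      return fn(...args);
    } finally {
      currentMapping = prev;
    }
  }

  // Snapshot.wrap(fn): capture now, restore around every later call.
  static wrap(fn) {
    const snap = new Snapshot();
    return (...args) => snap.run(fn, ...args);
  }
}
```

A linked-list or HAMT implementation would change only how `currentMapping` is represented; the observable API stays the same.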
If you have a nested run that overwrites a previous variable – we have an object, and inside `v.run` another object – the original object is no longer observable from inside this context, yet it still exists. At some point in the future you need to compact your linked list, so that the object can be freed and any FinalizationRegistry attached to it can run. This is complicated: there are only certain points where you can do this, and depending on whether you use a mutable strategy, it incurs memory usage. The other implementation we have considered is the HAMT, a hash array mapped trie. It is exactly like the linked list, but you have multiple branches from the root. Instead of one linked list that has all entries, you have 32 linked lists that each have 1/32 of the entries in that particular branch. Recreating a particular branch is cheaper if you run into this case where you have a `V.run` or multiple overwriting runs happening: you can recreate a branch that has a smaller number of entries inside of it. + And then you don't have to worry about compacting, because you have already done it during the run. + JRL: It has the same performance benefits as the linked list. You don't have the memory usage, and `V.get` is a bit slower, but overall it solves the exploding-memory problem better and it doesn't have the compaction issue. Those are the three strategies we have investigated. They're all in the design doc. + JRL: We have Test262 tests. It's a comprehensive test suite for everything we are aware of. There are a couple of things to do after iterator helpers lands, and maybe there's a case we haven't seen, a case we're missing. But there are like 150 test cases in this already, and we will keep adding them. It's up to date with the latest spec text and covers everything we know about. + JRL: We are working on HTML integration. We are collecting all the methods and events that will need to change to interact with AsyncContext.
We then need to determine, for each one, whether it's init time, like for the observers; registration time, for the event listener; or call time, for certain attributes. + JRL: So we're trying to go through everything and determine exactly which behavior they are expected to have. We are using task attribution and zone.js as prior art for AsyncContext, as a reference point for what it should basically be, but we're not sticking to those as ground truth. We will decide exactly which behavior is appropriate in each case. + JRL: For the spec text for this, we will have all the spec text that we are aware of open as pending PRs against the HTML and WHATWG specs, to make sure everyone is aware of what changes will be necessary and to get all of the implementers to agree to them. Once we have all of the PRs up to date with the current semantics and behaviors that we want, we will be asking for Stage 2.7. This is the only thing stopping us right now from asking for Stage 2.7. Everything else is in a pretty good state: there are no bugs we are aware of, no behavior we need to change, or anything. + JRL: So our next steps: we have spec text, and it's complete. We're at Stage 2, asking for 2.7 once we have the HTML integration. We are actively investigating the HTML integration side, opening the spec text with the exact behavior we expect, and keeping it up to date with the state of our proposal. + We are investigating the implementation for V8; that's where we are looking at the linked list versus HAMT versus map designs. You can see the design doc for what V8 is considering. We have the Test262 test suite; it is written, and we are just waiting to hit Stage 2.7 to officially open the PR and merge it. + We will be going for Stage 2.7 sometime this year, when we can get the HTML spec text complete. We are open to everyone getting involved in this proposal.
We currently meet every two weeks to discuss updates, what we think should happen, and any issues open on the GitHub repo. You can join the AsyncContext channel in Matrix if you want to keep up with any of the chats, and we are always happy to accept PRs. All right. Is there anything on the queue? + USA: Yes. Thanks for asking. And we have a few minutes, so let's go through the queue, but please try to make it fast. First we have ACE. I don't think you need to speak, but feel free to. ACE says that the agenda needs a link to the slides; JRL, please add it when you can. + MM: A quick question about unhandledrejection. First of all, semantically, I like the solution if we can afford it, and I do hope we can. At the time the rejection happens, of course, you don't yet know that it's unhandled. If you otherwise never need to track the context through promises – it always follows control flow, like with the registration – would this be the one thing that requires tracking the context associated with a promise? + JRL: There is a spec host hook for rejection tracking, which HTML implements, and which allows it to track unhandled rejections. We invoke that hook during the promise's rejection phase: after the throw, when we handle the completion value, we determine that it is an abrupt throw completion and immediately call the rejection-tracking hook. That hook needs to see the current context – the context of the promise's execution. Before, we restored the previous context before invoking the hook; now we restore it immediately after invoking the hook. The hook is able to get the promise, see the context the promise is executing within, take a snapshot of that state, and restore it later on, whenever it's invoking the unhandledrejection handler. + MM: If everyone thinks there's no implementation problem, then since I like the semantics, I am fine. + USA: Great.
Next on the queue, SYG says thanks to the champions for the data structure exploration. And that's it. + JRL: Okay. So I will add the slides to the agenda. Sorry, I was travelling yesterday, so I didn't add them. That's it for updates. + MM: Question. I thought that there was – sorry – I thought there was a controversy about some kind of synchronous finalization or deterministic finalization, not depending on garbage collection, when a context is done. What is the status of that? + JRL: We had previously talked about context termination. We are not pursuing that in this proposal. It was never actually a part of the proposal; it was something we thought could happen as a follow-up. + MM: Okay. Great. I prefer it not to happen, so that's wonderful. + JRL: I think we have agreed at this point that we are not going to pursue it. There was an update yesterday, I think; DE or someone commented on the issue. Sorry, I wasn't prepared to answer this question. If anyone has a definitive answer for this– + DE: Nobody is working on that feature now, on task termination, as far as I know. It would be interesting, but there is nothing to fight against, because it's just literally not happening. + JRL: Great. Thanks. + USA: All right. Justin, are you finished? + JRL: Yeah. That was it for the updates; as long as there are no more questions, that's it. + USA: Please put a conclusion and a summary in the notes. + + ## Deferred import evaluation for Stage 2.7 (without "tree-shakeable" exports) + + Presenter: Nicolò Ribaudo (NRO) + + - [proposal](https://github.com/tc39/proposal-defer-import-eval/) + - [slides](https://docs.google.com/presentation/d/1oPEF8nA9Iq5cAqjN-FqMigNNfz6lWCUbNfIsEjRXf4Y/) + + NRO: So hi, everybody. Deferred import evaluation, for hopefully Stage 2.7.
+ + This proposal gives you an easy way to import modules while deferring their evaluation until it's actually needed. In some large code bases, module evaluation is a significant part of the startup cost, but it is not always needed up front, so an easy way to avoid it improves the startup time of applications. There is a big elephant in the room here, which is top-level await, because deferred modules need to be evaluated synchronously – if you can afford an await, just use dynamic import – and top-level await is obviously asynchronous, so it doesn't play well with this. The decision we made was that async modules are still eagerly evaluated; when deferring, we only defer the synchronous dependencies. This doesn't prevent the optimization completely; it's best-effort. + NRO: To clarify what I mean, we have this module structure, where the dashed arrows represent deferred imports, and there is a module that uses top-level await. + So when we run the initial execution of the graph, it includes the non-deferred dependencies on the left, and it includes the async module. First we evaluate all of this part; then, when at some point, for some reason, we trigger evaluation of the deferred module, it evaluates the remaining parts. If we were to describe the semantics another way, they are equivalent to moving the ImportDeclaration for the async module out of the deferred module and into the module that's doing the deferred import. So the entry point here has, at the top, both this import defer declaration and an ImportDeclaration, and then uses the namespace of the deferred module. So why this? Why not use dynamic import, given it is already in the language to delay code execution? + Dynamic import is great for improving these kinds of startup performance cases, but it forces you to use await.
And this means that you need to change all your callers. And if the place where you'd introduce the dynamic import is in a callback you pass to somebody else – to a library or some host function – that caller needs to be able to handle async callbacks. It has very high friction when you introduce it. + We are aiming for something that is maybe not as good as dynamic import, but that can be adopted at large scale. + What performance improvements are we talking about? We have a few cases for this. + First, exactly what each option skips. When we have import defer, it skips most evaluation and stays synchronous, but it forces us to load the full module graph. Dynamic import skips both loading and evaluation; however, it's asynchronous. There's a third case here that doesn't directly come from the proposal, but is a way the proposal can be implemented in some environments. The spec defines import defer in a way that works everywhere, but on some platforms you actually get more benefits, because you don't need eager loading anymore: you already know whether the module graph contains top-level await or not, and modules can be loaded very fast. What are these environments? Some of them are React Native, where you have a bundle that ships everything together; platforms where you preload the code and the code is executed when requests come in; or the browser cache, where the code is already there and maybe already compiled and pre-analyzed. + NRO: Some concrete case studies. React Native: Bluesky uses React Native. As I mentioned, all the modules are available in the bundle, but the code doesn't run eagerly; it uses lazy ESM-to-CJS compilation, which basically runs with the import defer semantics, and they know there is no top-level await because the compiler doesn't support it. So they mark everything as lazy, and startup time improves by roughly 30%. You can go check the numbers in the pull request. + NRO: Another case study is the Element web app. It's a very big app bundled with Webpack, running entirely on the web.
Everything still needs to be loaded; only execution is deferred. This is the general case. The way I approached introducing import defer was to start from the entry points, look for all the import declarations that are obviously not needed at startup, and mark them as defer. I did this for a few entry points; maybe half of the imports ended up marked as lazy. + One thing to note is that I could not do it blindly, because some modules did rely on the evaluation order. The result – this was on the web page – was that we were able to skip roughly 50% of the startup execution time before the page is ready to interact with. These results could also be obtained by splitting out parts of the modules and dynamically loading them, but it was easy to add import defer, compared to how hard it is to add dynamic import in the right places, which makes some code asynchronous. + NRO: So, what changed since I last presented this proposal? We added a dynamic `import.defer()` syntax that preloads the module without evaluating it. This is for symmetry; it's the same as for the source phase. + We changed when evaluation happens. Rather than on access of exports only, it now happens on access of any string property on the namespace object. This is so that, even if the deferred module doesn't have that export, access still triggers evaluation. You can trigger evaluation from the namespace in a way that is more tool-friendly, because any string property access will trigger evaluation, without the tool needing to know the exports of the module that has not been evaluated yet. + So, for example, this is how the webpack implementation works. + NRO: Another update – this is probably the most significant one – is that the Wasm integration doesn't use top-level await anymore. The reason the Wasm integration used top-level await was due to some constraints presented by the original implementation in WebKit, where doing WebAssembly instantiation synchronously was too expensive. In the meantime, it has been rewritten and doesn't use top-level await anymore.
This means that Wasm modules are compatible with import defer; they can properly be deferred. + NRO: There were other cases in which you would need top-level await, but now the main resource types on the web can be loaded without it, so they are compatible with import defer. The most important one is WebAssembly, because it was designed with dependencies in mind. + NRO: So there are some open questions that came up after the Stage 2.7 reviews. Unfortunately, they were presented after the deadline, so this section of the slides is after the deadline. + NRO: One is that evaluation is triggered by any string access on the namespace. It might be better to just throw when trying to defer-import a module with no exports: a module with no exports is only useful for its side effects, so there is no point in deferring it. I would love to know your opinion about this. + NRO: The other one is what should happen when accessing properties on the namespace of a module that didn't finish evaluating. Right now, you can get access to a namespace that is not fully initialized, due to cycles: some of the properties in the namespace are present, while other properties throw a TDZ error, for let and const variables that are not initialized yet. This proposal doesn't change that. However, the proposal makes it easier to get a reference to the namespace of a module that didn't finish evaluating: if we import defer a module, and before we access a property somebody else imports the module, driving it to evaluation, and the module throws, at that point we already have the namespace of the module, in a non-finished state. So, is it okay if we just follow the existing namespace semantics, where accessing the namespace works except for TDZ errors on bindings that were never initialized, or should we change the proposal to consistently re-throw the module evaluation error in this case? + NRO: Yeah. So, what about re-exports?
In September and November, I presented an extension of the proposal about having deferred exports and loading. I called it tree-shakeable exports, because that's the name used in the ecosystem for removing branches from the module graph when they're not used. + NRO: While writing the spec text, we realized it's a very different proposal from import defer, because it actually affects loading, and it doesn't have the same constraints. + So I think the import defer proposal is ready for 2.7. However, the export tree-shakeable part – or whatever the keyword ends up being – needs more work. If import defer advances to 2.7, I propose that the export part become a separate proposal and remain at Stage 2. + NRO: So, I have worked through the Stage 2.7 requirements. We have complete spec text and reviewers. I pinged the editors too late, so we don't have editor reviews yet; 2.7 would be conditional on editorial review. And that's it. Let's go to the queue. + USA: Okay. First in the queue we have KG. + KG: Yeah. This isn't necessarily a concern, but I want to run it by you. So you don't defer network requests, unless I have misunderstood: you still do the fetching, but not the evaluation. The benefits accrue only in the cases where the modules you import have side effects. If you are importing modules that contain only a bunch of function declarations, as is best practice, then you don't get any benefit, because evaluation of a module that contains only function declarations is essentially free. + The benefits only accrue if there are side effects, but those are the cases that are most likely to be problematic to refactor in this way, because if you are depending on side effects, reordering things so that the side effects are triggered lazily instead of at a deterministic point is the thing most likely to mess something up. + Sure, it doesn't matter whether you have just function declarations being evaluated at one point or another. But if you actually have side effects, then it does matter.
And this proposal only matters at all if you have side effects. Is that understanding correct? And how do we feel about this, if so? + +NRO: There are many cases in which you have local side effects. So not just function declarations, but also things like initialization of objects, or classes with decorators ā€“ local side effects. But yes, if you look at modules in practice, you find that many have not global effects but local state, and that state has some cost to initialize. + +KG: So we are worried about skipping that initialization cost, even though it doesn’t matter in which order the initialization happens. + +NRO: It’s mostly local initialization; it doesn’t affect global state. But it still has a cost. + +KG: I guess I am curious what fraction of cases this would actually apply to, because it has been my experience that most people either maintain the discipline of not having any side effects on imports, global or otherwise, or extremely minimal side effects, maybe setting up an array of constants or whatever. Or people don’t care at all, and they have global side effects on import. But perhaps that’s not an accurate understanding of the ecosystem. + +ACE: Yes, a repeat of what NRO said. So we have implemented this at Bloomberg and have been rolling it out across projects. And we are definitely seeing a benefit: when there are lots of modules, even if all they are doing is exporting declarations, it adds up. If a module is just exporting `export const PI = …`, that’s cheap. But like NRO said, it’s common for people to build up large configuration objects, because it’s easier to create them programmatically rather than literally, and that comes at a cost when the project scales. Maybe each one is cheap, but when you have tens of thousands of these, it adds up, and we are seeing a good performance increase.
And we definitely advise people not to have side effects in their module loading. And if they do, we treat that effectively like a bug ā€“ when we detect this as we roll it out, we work with the teams to refactor the code so they don’t have side effects. + +KG: So I guess one of the things that affects my thinking about this is that the case I care most about is the Web, and on the Web there’s a lower-bound cost to any import, which is the network round trip. And so I would expect ā€“ and perhaps I am wrong ā€“ but I would expect that cost to dominate the cost of building up a configuration object, for example. Not, you know, if you’re doing something more expensive ā€“ if you’re synchronously instantiating WebAssembly, that takes more noticeable time. But if you're just building up a configuration object, I would expect the network request to dominate that time. It wouldn’t if you were loading modules locally, as I believe Bloomberg is. So while I believe your experience, I am not sure that it’s necessarily applicable to the case where we’re loading modules from the web. And I wouldn’t want this if it only provides benefits to environments outside of the web. + +ACE: On the web, as NRO said, you get the benefit the second time around, when the module is cached. On the first load on a slow connection, the network will dominate the overall loading time. But in the cached case, even on the web, you get the benefit on the second visit to the site ā€“ if the cache is still valid, of course. + +KG: Yeah. That’s true. + +DE: There’s also a first-load benefit on the web. The thing is, native module loading on the web can currently only be used for bigger chunks of modules: in this pattern, you pull together a bunch of modules, make those into HTTP-level resources, and use a CJS or AMD loader, or a compiled-away loader, within each chunk. Import defer is useful within those chunks. Code splitting works in terms of HTTP resources, with dynamic import for the outer part.
Import defer is for the inner part. So I think the time when this will have the most benefit on the web is once we solve the problem of native ES module loading, which module declarations should help us solve. Right now, in practice, modules are a tool-level feature. So yeah, import defer could help when things are in cache, and that’s an important case, but it’s probably not a good idea to deploy native modules too aggressively regardless... + +KG: I may have misunderstood the first thing you said. It sounded like you were saying this is beneficial if you are using a bundler? Because ā€“ + +DE: Yeah. + +KG: The bundler will implement the semantics. + +DE: Yeah. The benefit applies when using a bundler with code splitting, and thanks to JWK and NRO, we have bundlers supporting import defer. This is how it will initially provide performance benefits to the web, besides the cached case, which does matter. Today, you have to use bundlers enough that you’re not shipping way too many individual resources; that’s just the baseline. You are currently not using native ES modules, except for a limited number of them, regardless of whether you use this proposal. + +KG: Yeah. Okay. I guess I am not strictly opposed to putting features into the language if they benefit bundlers, but I am a little bit wary of it. + +DE: We will recover from this weird state once we have module declarations. Then we will have a more direct usage of this from native ES modules, once they become a responsible thing to ship. + +CDA: Sorry. We have less than 5 minutes and we have got some other items in the queue. + +KG: I worry about shipping features that we expect to become useful in the future, instead of just waiting for the point at which they become useful. + +CDA: SYG, did you want to respond? + +SYG: I will skip what I originally put in the queue.
For KGā€™s concern, I agree with, like youā€™re asking browsers to ship something that will not get direct use, that might get direct use contingent on a future thing which means we have to be predictive enough in the design that comes the time for native adoption that we made the right design today. I am not confident we can do that correctly in general. I think we have plenty of prior examples where we predicted incorrectly. I share the similar wariness that if this is mainly for bundlers, for the foreseeable future, I donā€™t really understand why the standardization. The best argument I can back form for why we have to standardize this, because itā€™s not semantics preserving. Because it reorders evaluation order, you need the programmer to signal intent to opt-in that I am okay with evaluation order being reordered. And that you would need some way to convey that in the language and thatā€™s why you want to standardize. If the implementation and the experience which we get from browsers is completely divorced from reality because nobody is using it directly, that doesnā€™t make me feel good. + +SYG: And as a segue into the next point, the big design question for me is still like I donā€™t understand why itā€™s okay for the top level for the presence of top level in a subgraph to basically silently completely custom the behavior order of the feature when you use import defer. Like, what is the benefit of silently eagerly evaluating instead of throwing? + +NRO: Itā€™s not completely disabling import defer. Just disabling for the minimum async graph. So you can import defer from the different parts and unless the module starts using await, most of the parts will be deferred. If we were to throw, it would just be completely incapable. + +SYG: In my opinion, this is not making is work with top-level await. This is making it work ā€“ this is making it composed with top-level await in a surprising way that may be not non-local. 
Like, you have to figure out where in your dependency graph the top-level await showed up, for example. If somebody adds top-level await to one of the dependencies deep down, now the evaluation order and the timing of your import defer upstream completely change. That seems wild to me. That seems undesirable in every case I can imagine. + +NRO: Are you suggesting we should throw instead? + +SYG: Yeah. I am suggesting that it behave as deterministically as possible with local information at the import site. Because this is non-local ā€“ because await can happen somewhere in a module subgraph ā€“ having that do something silently completely different is, I think, very surprising. But if you make it throw, then at least the importer gets told that a surprising thing would have happened and you should fix that. I don’t know if there’s another solution other than throwing. + +NRO: I’m worried about throwing because it doesn’t give you an actionable way forward. If a dependency uses top-level await, your only option is to completely remove import defer… or go deep into the module graph, into all the other branches, to make sure everything else is still deferred. + +SYG: Right. Isn’t that the point? If you want to defer evaluation, and a big chunk of the subgraph becomes non-deferred because of something that was non-local and non-obvious to the importer, don’t you want to do a deeper dive into the module graph? It’s not a best-effort optimization. It’s a semantics-changing thing; it’s not a semantics-preserving optimization. + +NRO: There’s code that you might not control. You might have multiple external dependencies that have their own dependencies; there is no way for you to go and change all the dependencies in the chain. + +CDA: Okay. We are past time. + +NRO: Okay. + +CDA: There is a long queue of items. + +ACE: Could we use the next slot, considering the next item is the second half of this proposal? + +CDA: Yeah.
This is a ā€“ Nicolo, you have the next topic. So basically, you have all the time between now and ā€“ + +NRO: The next topic is 30 minutes, right? + +CDA: Yes. Correct. + +NRO: Letā€™s give at most 15 minutes for this. I would present ā€“ could you please make sure that we donā€™t use more time than 15 minutes to continue with this discussion? Thank you + +CD: Yes. DE took it off, but was it relevant to jump the queue? Itā€™s fine if it is, if itā€™s relevant for ACE to talk about it now. + +DE: Please. That would be good because I think Nicolo was giving theoretical arguments and Ashley has practical experience for why. + +ACE: Thanks, DE. + +So I can see where youā€™re coming from Shu and the natural reaction to the proposal. In practice the concern doesnā€™t hold at least from our experience of using this for over a year, one introducing top-level await to is already kind of ā€“ itā€™s a big change to things, itā€™s going to change the order of execution to the app. Even before this proposal, I would hope developers are somewhat conscious of the impact of adding top-level await. And in the future an extra thought of that is, that it cuts off this particular optimization and I think maybe some of the reaction is maybe the word defer, when someone is saying, import defer, that actually we are not deferring everything, the loading of the file, the parsing of the file or evaluation of top-level await. But we are doing our best effort to defer. Itā€™s not just that we can leave the synchronous execution by remaining and by definition we canā€™t defer asynchronous discussion to a synchronous point. If people think that is import as optimally as you can it kind of makes sense. Why this is important for us at Bloomberg, really what we see is top-level await used quite low down in the network graph, in the module graph. It's like setting up low level things. Itā€™s rare that people need top-level await. 
It appears in module initialization, apart from interacting with foreign function interfaces, to set something up that they want to provide synchronously later on, rather than making everything they export an async function because of that one initialization. So the amount of the module graph cut off from the optimization tends to be very, very small. And also, it’s very low down, as NRO was saying. The person writing import defer at the top of the graph may not have a natural way of referring to that top-level await module ā€“ it’s so far removed from them. Really, what they are saying is: I want this particular dependency, which is many, many levels away from the top-level await, please defer it. The fact that we then defer all the way down until we hit the low-level thing they are not aware of, and only initialize that low-level thing when someone looks like they need it, is a big win. If that helps. + +SYG: If I can respond real quickly to that. One argument I heard was that top-level await already changes ordering. My understanding is that import defer changes ordering in a more intrusive way: if you introduce top-level await, the ordering that changes is still within ā€“ sorry, ā€œphaseā€ is the wrong word ā€“ a particular stage of the application. Import defer interleaves top-level module evaluation with arbitrary other code evaluation. It defers it much longer. And that kind of interleaving ā€“ with the thing that is interleaved depending on non-local top-level await deep down in the module graph ā€“ that’s the thing that worries me. TLA, yes, changes ordering, but it’s localized to the startup phase of the application. + +Yeah. Let’s take the rest of the queue. That’s my only response to that. + +NRO: Can we finish discussing this and then go back to your point? + +JRL: We’re still sticking on the same queue item? + +NRO: Yes. I would go to JWK now.
+ +JWK: Replying to SYG’s concerns about non-local information: you don’t want a deeper module in the graph to affect the execution ordering of the current module. So in the Webpack implementation we have an option called ā€œrequire sync assertsā€, which means if you want to use defer, you have to assert that the module is synchronous. To emit the synchronous linking error, you also need to traverse the whole graph to find any top-level await module. But this way, if a module suddenly becomes async, you don’t silently change the execution order; you get a fatal linking error instead, which prevents footguns. And I have another thing to comment on. You don’t want to ship anything in the browser just because it might be used natively in the future (not now). But if the browser is not doing that, and it’s not in the language, then the tooling will invent its own conventions for this. For example, in Webpack we are doing magic comments: you write a comment, /* webpackDefer: true */, inside the import statement (import * as mod /* webpackDefer: true */), and then it becomes a deferred module import. Imagine if this feature is widely adopted and people write webpackDefer, viteDefer, etc. I don’t think that would be good for the community. + +SYG: There is value in standardizing this. That is a different value than having all the runtime implementations support it. It happens that we cannot separate the two today, but that is not set in stone. I agree there is value in standardizing this for the sake of the tools. + +NRO: This is not only for tools, but also web implementations. + +SYG: Sure. There is value in standardizing this. But these are two coupled things ā€“ we have a single lever today, which is changing the language. When we do that, all implementations, including the browsers, have to do the thing.
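For reference, the two spellings JWK contrasts look roughly like this. The `webpackDefer` magic comment is the Webpack convention he describes; `import defer` is the proposed standard syntax, which does not yet parse in shipping engines, and the module specifier is hypothetical:

```js
// Bundler-specific convention (Webpack magic comment):
import * as mod /* webpackDefer: true */ from "./mod.js";

// Proposed standard syntax:
import defer * as ns from "./mod.js";
```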
But the value of the impact of the feature can be different from space to space, and we should probably decouple that in the future. + +NRO: MAH? + +MAH: Yeah. I wanted to get back to what KG and SYG said about how this feature ends up not being directly useful on the web, only because the deployment model of the web is not to use modules directly but to rely on bundlers that do, basically, the sort of shim work of the engine in handling modules. So I think this means it’s useful in the language. It just happens that it’s not the JS engine in the web environment that implements this, but the bundler that implements this for the web, because that’s the deployment model for the web today. + +SYG: Yes. I agree with that. + +KG: Yeah. To be clear, when I say ā€œnot useful on the webā€, I don’t mean it doesn’t provide benefits to websites. I mean that implementing it in browsers specifically doesn’t provide benefits to the web. + +JWK: Could you ship this only in Node.js without shipping it in the browser? + +SYG: I think that is a bigger thing. A future that is certainly possible, but it breaks with norms that the community, and TC39 itself, has about what it means to be a compliant implementation. I think that deserves its own discussion, but it’s possible. + +NRO: Yeah. Thanks. I think we should keep that discussion for some other time. Can we go back to finishing the topics in the queue? I think John is next. + +JDD: Yeah. I just wanted to raise awareness of Node’s TLA handling: they have experimental support for synchronous ESM and debugging features for root-causing TLA. So that is something to be aware of. + +JDD: The other thing I wanted to mention was that top-level await is already kind of a wart for those scenarios. So this is kind of nothing new. When this concern came up, I wondered if calling this something other than defer would be a hint: hey, if this can’t be deferred, do this instead.
The other thing to note is, I am big-tent JavaScript. I don’t like gatekeeping browser-only functionality. I find that trying to cram browser-only functionality into Node creates conflicts and less optimal ways of doing things. So I like keeping an open mind and figuring out ways to adopt things that aren’t necessarily browser-specific. If this becomes an appendix, or uses the fancy wiggly language of implementation-dependent behavior as before, I am up for that. I recognize that bundlers are doing the work for browsers because it’s not feasible to ship unbundled ESM. For browsers to stand up and say ā€œwe do thisā€ but rely on tooling to actually do it seems disingenuous, and I don’t like it. But I am open to the possibility of figuring this out. + +KG: You raised the question of what to do about importing modules that don’t have any exports. I do not like throwing at import time in this case, because there is a sort of obvious optimization that involves stripping out dead code, and stripping out the very last thing in a module shouldn’t suddenly cause importing that module to throw. Throwing at the time you try to access it, or just never throwing ā€“ these are both better. + +NRO: Okay. Thank you. + +KG: Concretely, think about importing an empty module. That doesn’t throw, even if it’s deferred. + +NRO: This is a change, but yes, it creates nice symmetry between everything. + +JRL: Okay. It looks like the queue is cleared up. + +We can move on to the next topic. I am not sure about the split between this and your next topic, which also discusses deferred imports. + +NRO: Yeah.
Letā€™s go to my next presentation about the export part + +### Speaker's Summary of Key Points + +- The proposal was re-presented with the following differences from the previous update: + - `import.defer(...)` dynamic import syntax + - Evaluation happens on any string property access on the namespace + - The Wasm-ESM integration doesn't use TLA anymore, making it compatible with the proposal +- There were some doubts about the value of this proposal in browsers, given that most people bundle their code and don't use the browser's native ESM implementation +- The committee doesn't agree yet on how we should handle top-level await in deferred imports + +### Conclusion + +The presenter didn't ask for stage advancement due to the feedback received during the discussion + +## Treeshakeable/deferred re-exports status update + +Presenter: NicolĆ² Ribaudo (NRO) + +- [proposal](https://github.com/tc39/proposal-defer-import-eval/pull/30) +- [slides](https://docs.google.com/presentation/d/1iM5cRgdRXLWLq_GxgRvzYmUTXEK6gzH_8QNgLKMmv7o/) + +JRL: In the timebox, you only have 12 minutes left. + +NRO: I will just present ā€“ I will try to present in 10 minutes. + +JRL: All right. Perfect. + +NRO Okay. So yeah. This was the other proposal about exports that can be tree-shaken away built into the language. Why is this? There are many libraries like that exports many functions to a single entry point. And so that users donā€™t have to manually put every single internal file. Like the example, import all of the functions from a single import. And like this DX about. However, this is done like basically exporting everything from the entry point. + +NRO: However, this has a problem. Maybe you want to import 2 or 3 functions from the library. And so it makes it very difficult to use the libraries without a bundler or some tool optimizing code. + +To solve this problem, tree-shaking, where they remove all the unused branches from your module graph. 
This is in general not ā€“ there might be side effects. So they want to correct ā€“ Webpack expects libraries to be explicit marked on whether they can be tree-shaken or not. Or try to analyze the code to understand if there are side effects or not. Always treat the dependencies that are ā€“ that could have effects. The problem with this is the JavaScript is a very dynamic language. Itā€™s very difficult to detect when there are side effects or not. So like either removing too much or removing too little. + +NRO: There is another solution for this outside of the web in NodeJS because even if you are not doing this in environments, in importing a module graph, some common JS lines reports all of the functions through getters that only with the export value when needed. So in this example here on the screen, it would only require the module and the dependencies and ignores the others. + +So we have solutions and tools, but can we use language that is usable when you are directly using this in the browser? + +NRO: So what I proposed this time was to ā€“ I now think `optional` could be a good keyword, to mean only load this module if the binding export is being used. So I like the keyword optional. And I am using it, but itā€™s very much up for debate. + +NRO: So in practice this means you have these two models on the left. And marking this and dependencies as optional. The import to add will be respected and will be like did if you are going to transpile them, import to add in the main file. And the other ignored because itā€™s not being imported in main. + +NRO: So how does this interact with the other modifiers? Source phase imports. Well, you could just take the optional key word in front of your key word export statement. And does it mean only add the module and extracted source of the importerā€¦ so if foo binds is not I am ford for my module, donā€™t use the JS file. 
With the deferred proposal, we can combine this with namespace re-exports: with the keyword there, we only load the module if the `foo` binding is imported by the importer, and we defer its evaluation as with deferred imports. We integrate it in a way that preserves the benefit of every single re-export being evaluated only when accessing the corresponding property: when I later access the two properties, I will execute the module that provides value 2. + +NRO: So this example basically has a long list of keywords. Maybe we should consider reducing that. Both `optional` and `defer` are in general only safe to use with ā€œpureā€ modules ā€“ in quotes ā€“ meaning modules that don’t have global effects. And both try to reduce startup time, by skipping loading and execution respectively. There is some difference ā€“ loading versus execution ā€“ but maybe it’s not a meaningful enough difference to justify two different keywords. With one we are deferring the execution of the module; with the other we are deferring the decision to load the module. + +So in practice, it would look like this: instead of having `export optional` and then a list of names from a module, we use `export defer` with a list of names, and instead of having to mark `defer` and `optional` separately on each single re-export, we mark it as `defer`, and when it’s imported as a namespace, it gets the deferred-namespace semantics. And the same for the namespace re-export. There is only one problem: maybe it’s weird when it comes to integrating with `export source from`, because I described `source` and `defer` as different phases, but here we would mix them to represent loading rather than execution. + +NRO: So where is this proposal? Originally, it was presented as part of the import defer proposal. As I said before, I wasn’t sure it should be the same proposal. It’s currently living in two different pull requests: one using two different keywords, one using a single keyword. + +But the question here is: do we think this should be a different proposal or not? + +You can ignore the middle line. But yeah.
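The single-keyword direction NRO describes would look roughly like this. This is proposal syntax, subject to change, and the module specifiers are hypothetical:

```js
// library entry point: each re-export is only loaded if its binding
// is actually imported by a consumer.
export defer { add } from "./add.js";
export defer * as utils from "./utils.js";

// main.js — only ./add.js (and its dependencies) gets loaded:
import { add } from "library";
```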
If we think we should go with two different keywords, we keep them as separate proposals; if we think it’s better to introduce a single concept in the language, then they can stay in the same proposal. Anything else about whether these two proposals are actually the same or not? + +JRL: There’s no one on the queue. + +NRO: Okay. Yeah. I would love to get some feedback about the `optional` keyword, or whether it should be `defer`. We’ve heard from developers that `optional` might not be the best keyword, because it seems to say ā€œtry loading this module, but it’s optional, so don’t worry if it failsā€. If you have any ideas about the keyword, please reach out to me. I see a comment in the queue. + +JDD: It’s non-verbal. I find proposals work best when they are small and easy to reason about. When they get complicated, they fall apart. + +NRO: Do you have an opinion about reusing the keyword or not? + +JDD: I will need to dig in more on that. + +NRO: Okay. Well, thanks, everybody. I see Mark in the queue. + +MM: Yeah. I just prefer the `defer` keyword, because by using the same keyword, it sort of puts the features in the same psychological space. It seems like there’s one set of related concepts to learn, as opposed to yet another concept to learn. + +NRO: Okay. Thank you. Okay. Yeah. Thanks, everybody. + +JRL: Yeah. Actually, you’re early. We have 7 minutes until lunch. I guess we can either come back 7 minutes early or extend lunch by an extra 7 minutes. + +NRO: Let’s finish the discussion of whether the language should have different requirements for browsers and tools ā€“ or no, I feel that’s a longer discussion. + +SYG: I would strongly recommend not doing that in these 7 minutes. + +NRO: Okay. + +DE: I do think it’s urgent that we get back to this, because this is a kind of new category of objection that apparently blocked Stage 2.7 for the proposal. I hope next meeting we can get to that topic.
+ +JRL: I think GB has now put up a queue item that’s explicitly talking about this. I don’t think we can address all of this within 7 minutes, and I don’t think we have time at this meeting for an overflow. + +GB: Wait ā€“ this is about determinism. + +JRL: Is that the same ā€“ go ahead. + +GB: Yeah. This is not the same as the correspondence between bundlers and the web and the point SYG brought up. So maybe I can briefly address the topic I was raising earlier. SYG’s argument sounds like he was saying that, between the existing TLA ordering and the import defer ordering, there’s this kind of longer deferral happening on top of the TLA ordering, and he was concerned about introducing this new, earlier phase. I think defer is definitely interesting from an evaluation point of view, because it does introduce a new evaluation phase. + +GB: And there’s the fact that it’s able to be triggered within the evaluation of another module. The way that TLA is designed, though, as soon as you add a TLA dependency, your top-level execution will complete, but the TLA execution will continue, and other executions can happen in the meantime. So the argument I am trying to make is that TLA already has multiple orderings, because you can race other work while the TLA is completing. + +GB: So there’s this sense that when something is part of the TLA graph, the TLA part will be raced separately from the well-defined singular graph that you can analyze and that executes in post-order. That non-determinism is a TLA property, and it already exists for TLA graphs. And it’s only the TLA graph that is going to be evaluated eagerly ā€“ that’s what is integrated into that earlier phase for defer. The synchronous parent subgraph remains deterministic. The non-determinism is constrained to the TLA leaves.
I donā€™t feel thatā€™s an extension of any non-determinism outside of any determinism that already existed in TLA. Just to try and answer SYGā€™s technical question on that, I know people want to get to lunch, so we donā€™t need to keep digging into that one + +SYG: I want to understand the TLA determinism thing. The point; today TLA is already non-deterministic in that top levels of modules with TLA can race with top levels of other modules with TLA. Is that correct? + +GB: Right. Because while that TLA is completing something else, another top level execution operation could begin and beat certain earlier dependencies in that graph. So TLA ordering is not fully deterministic. + +SYG: Right. So the point I was trying to make about the different phases was that today, the set of things that are racing with each other is constrained to module top levels. With import defer now you are interleaving module top levels with other user code that is possibly not in module top level, like event handler or something. And the fact that TLA changes what code interleaves with the user code because some parts of the graph become non-deferrable. Thatā€™s the nature of my discomfort. + +GB: So just to understand, if you are referring to the interleaving when thereā€™s a top level deferred access? Or ā€“ + +SYG: The point of the deferral is that you donā€™t evaluate the module top level until you touch the namespace object in some way. And where you touch this namespace object could now be anywhere. Right? You can pass that namespace object around your normal code and the point when you need it, it will evaluate the module top level just in time. + +JRL: Sorry. We have now hit the timebox for lunch. I am officially starting lunch now. SYG, if you want to continue discussingā€¦ + +SYG: Maybe this is just between you and me. Do you want to stay 5 minutes to hash this out? + +GB: I donā€™t want to keep anyone from lunch. Maybe we can continue the discussion off-line or ā€“ yeah. 
I donā€™t feel itā€™s a reason to stay overā€¦ + +SYG: I am not asking anyone else to stay. I am asking you to stay. If you canā€™t stay, thatā€™s also fine. + +GB: Okay. I should probably drop myself. + +NRO: Sorry. I donā€™t want to interrupt you. I think itā€™s good to ā€“ if you are going to talk about this, other people can follow. I am sure we will bring up this proposal again at a future meeting. + +JRL: Either GitHub issue or in the matrix chat sound appropriate for this? I donā€™t think we have overflow time to continue discussion. + +JRL: So that officially lunch will end on the hour. So you have 59 minutes. + +### Speaker's Summary of Key Points + +- `export optional { ā€¦ } from "x"` or `export defer { ā€¦ } from "x"` introduces support for built-in tree-shaking, by avoid loading re-exports that are not used. +- This could be either the same "feature" as `import defer`, or orthogonal. The chosen direction affects integration with the other modules proposals +- There has been a slight preference for reusing the same keyword rather than introducing a new one + +### Conclusion + +- export defer is no longer part of the import defer proposal, and will be a separate Stage 2 proposal.List + +## Iterator.range for stage 2.7 + +Presenter: Jack Works (JWK) + +- [proposal](https://github.com/tc39/proposal-iterator.range) +- no slides were presented + +JWK: Hello, everyone. Iā€™m bring back iterator.range for Stage 2.7, I hope. Let me do a quick recap of what this proposal does. This proposal adds a range function to the language, which can generate numbers. Thatā€™s very simple, and it has an API shape like this. It adds a range method on the `Iterator` global. It has a start and an end option and returns an iterator. Then the option -- in option, we have step or inclusive, and inclusive is if the end number should be omitted, and this is a very simple proposal, although it went through a tough discussion about the semantics of iterator or iterable. 
And the final solution was to rename this proposal to `Iterator.range`. And I want to bring this proposal to Stage 2.7 because it has had no activity for one year, and the iterator helpers proposal is shipping in Chrome and I think might get to Stage 3 or 4 soon, so I hope this proposal can advance to get implementer feedback. + +JHD: Yeah, I just want to confirm that I'm understanding things right. So this creates a one-use iterator, that then you can use all the helpers on, and it can take Numbers or BigInts but not a mix of the two, and it will similarly produce only one of the two, not a mix, and that's it, right? Pretty straightforward? + +JWK: Yeah. + +JHD: Awesome. Thank you. + +MM: Hi. Last time I paid attention to this proposal, the big controversy that I certainly have opinions on was about fractions. Some people wanted to be able to, you know, have fractions and increment by fractions, and others, including myself, objected that the roundoff issues in allowing fractions are just unsolvable and will necessarily confuse people, and that all the functionality desired from fractions can be gotten by doing a divide of the iteration variable. So in looking at the slides, I couldn't quite tell: where does the proposal currently land on fractions? + +JWK: It's using an algorithm like this to prevent the floating point number problem. + +MM: I'm sorry, how does that prevent the floating point number problem? + +JWK: If you add a floating point number repeatedly, it might accumulate the inaccuracy, but if you use multiplication, then -- + +MM: I see. + +JWK: You can keep the inaccuracy within a very small range (when using multiplication). + +MM: I see. I see. That certainly is less bad than what I was worried about. I'm not -- I'm not at all convinced this is harmless.
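As an aside, the multiplication-based stepping JWK describes can be sketched like this (a toy illustration, not the proposal's spec text; `rangeStep` is a hypothetical name):

```javascript
// Toy sketch: computing each element as start + step * i keeps each
// element's rounding error independent, instead of letting the error
// accumulate through repeated addition of the step.
function rangeStep(start, end, step) {
  const out = [];
  for (let i = 0; start + step * i < end; i++) {
    out.push(start + step * i);
  }
  return out;
}

// Repeated addition drifts: adding 0.1 ten times does not give exactly 1,
// while the single multiplication 0.1 * 10 does.
let acc = 0;
for (let i = 0; i < 10; i++) acc += 0.1;
console.log(acc === 1);      // false
console.log(0.1 * 10 === 1); // true
```

This bounds each element's error to a single rounding, though it does not eliminate floating point surprises entirely.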
+ +KG: MM, you do still have that problem that if you do, like -- if you start at zero and your step size is `0.3`, then you end up with four steps, because in floating point three times `0.3` is less than `0.9`. + +MM: Yeah. + +KG: So, like, the issue with floating point is still possible. + +MM: Okay. So I -- I really don't like putting users in situations where they can be confused by the difference between, you know, floating point and real numbers unnecessarily in cases like that. And everything that they would want from that, they can achieve by doing division of an integral iteration variable. + +JWK: So do you mean we should disallow floating point numbers, and only allow integers? + +MM: No. No. I think we should disallow fractions. I think that Number already supports `isSafeInteger`, as well as `isInteger`, but I think for this case, we want safe integers. And I think as long as the operands are safe integers, we should accept them as Numbers. + +JWK: Sorry, can you repeat that again? + +MM: Yeah, so BigInts of course have no problem at all, but I think that -- I think that we should allow Numbers which have non-problematic integral values, and those Numbers are exactly what we already support with the test `isSafeInteger`. So I think that the operands -- if the operands to `Iterator.range` are safe integers, then I think that they're fine. There's no need to prohibit Numbers altogether. + +JWK: What if they are not safe integers? + +MM: I think it should throw. + +JWK: Okay. I understand, but can you open an issue, so maybe we can talk offline. + +MM: Okay. I can open an issue. + +JWK: Thank you. + +KG: Yeah. Actually I have two responses. So the first is to the question of non-integral arguments. And I agree, Mark, that theoretically this seems like it would be confusing. But we have these -- like, this function exists across, like, probably more than a dozen languages, including some very commonly used languages.
And just empirically, it doesn't end up being any more confusing than -- like, of course people get confused by the fact that `0.1 + 0.2` is not `0.3`, but the issues around the floating point math here are not, like, actually a problem in practice. Like, we just know this. It's perfectly reasonable to say that theoretically this seems like it would be confusing, but, like, we have literally decades of experience with this in other languages, and it's fine. Like, it's fine. We know this. So I would really prefer to just do the simple thing and, like, yes, sometimes people are going to be confused, but that's inevitable when you're doing floating point numbers, and there's an explanation that's not any more confusing than any other place you're using floating point numbers. Like, we should just let you write the thing that you can write in every other language. + +The second thing is I really don't want to rule out `Infinity` as a valid upper bound, and if you constrain the range to safe integers, then that's a problem. + +MM: I agree with that. I agree that `Infinity` should be allowed, even though that's an irregularity when otherwise requiring safe integers. + +JWK: If `Infinity` were not allowed, then developers might have to write a randomly big number just to -- + +KG: Mark said it should be allowed. So I think we all agree it should be allowed. + +WH: I agree with KG for the same reasons that KG listed. I would be strongly opposed to disallowing fractions. + +SYG: Could you please remind me of the motivation for BigInt ranges? + +JWK: It's for completeness, because it might be weird that one kind of number can do this but the other cannot. If the decimal proposal ends up having a primitive form, it would also have support. Developers might be upset if they want to use BigInt but this function only supports floating point numbers.
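For concreteness, a toy model of the semantics discussed above (hypothetical `rangeToy`, not the proposal's algorithm): Numbers and BigInts are kept separate with no coercion, and `Infinity` works as an upper bound for an endless range.

```javascript
// Toy sketch of the discussed semantics, not the spec: no coercion,
// no mixing of Numbers and BigInts, Infinity allowed as the end.
function* rangeToy(start, end, step) {
  if (typeof start !== typeof end || typeof start !== typeof step) {
    throw new TypeError("start, end, and step must be the same type");
  }
  for (let v = start; v < end; v += step) yield v;
}

// Infinity upper bound: consume lazily instead of inventing an
// arbitrarily large end value.
const it = rangeToy(0, Infinity, 1);
console.log(it.next().value, it.next().value); // 0 1

// BigInts use the same shape, but can't be mixed with Numbers:
console.log([...rangeToy(0n, 3n, 1n)]); // [ 0n, 1n, 2n ]
```

Calling `rangeToy(0, 10n, 1)` throws a TypeError on first use, matching the no-mixing, no-coercion direction of the discussion.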
+ +SYG: I would like to hear a more concrete use case where BigInts are used for counting. I'm not opposed to the idea. I just generally find completeness to be a fairly weak argument. I can readily believe the use case if I just see one, basically. + +JWK: Okay. So, actually, I don't have really strong motivation for BigInt other than completeness. If that's a strong requirement, I can remove it. + +SYG: I think my preference here is -- I apologize for not having reviewed the spec ahead of time -- that if this new API follows the "stop coercing things" default guideline and we don't coerce BigInts -- well, I guess in any case, we shouldn't try to coerce a BigInt into a Number if you pass one -- then if we start without BigInts, it is backwards compatible to include BigInts in the future, as long as it starts by throwing on BigInts. + +JWK: Yes, it does not coerce things. + +SYG: Excellent. If anyone demonstrates a need for BigInt ranges, I'm happy to back them. + +WH: I have a preference for including BigInts. The API keeps Numbers and BigInts separate, which is good, and I think it would be a mistake to omit BigInts. + +WH: The other issue I have is that there are some bugs in the algorithms as written due to type confusion. It confuses mathematical spec numbers with BigInts and floating-point Numbers. It does an "is" test on floating-point numbers to check if something is mathematically zero, which will do the wrong thing. So the intent is good, but the wording could use some cleanup. + +JWK: I may need some help with this, because I'm not very clear on the numbers stuff. Yeah. + +WH: Sure, I can help you with that. + +JWK: Thank you. + +MM: So I see you put issue https://github.com/tc39/proposal-iterator.range/issues/64 on the screen. So, yeah, I was going to add an issue, until I saw this one. I was part of this discussion, and I think it is covering the right ground and it is still open.
And I'll just answer SYG: I strongly prefer that this thing does support BigInts. I think it's -- you know, I would be willing to give up Numbers before I would be willing to give up BigInts, because for BigInt, this is all very well defined. + +SYG: But the bar I'm looking for is not that it is well defined, but that it is useful. I'm looking for some kind of indication that it is useful for BigInts, and one example was possibly given to me on Matrix. + +MM: We use BigInts in our code in a lot of places where, before BigInts existed, we might have just used integral Numbers. But we do use a lot of BigInts. Once you're doing a lot of stuff with BigInts, it's very natural to want to be able to mix in ranges of BigInts. That's a very abstract argument, but I think it's true. + +JWK: Yeah, I can be satisfied with that. + +MAH: More specifically, we use BigInts also for indexing, which means sometimes you need to be able to iterate, and range is a good way to iterate. + +JWK: You mean indexing arrays? + +MAH: No, indexing other serialized data. So we also serialize BigInts manually. In general, we use BigInts as a way of incrementing a number that will not overflow. And as such, we need to be able to iterate over those entries. + +JRL: Okay, we have another reply to WH, plus one to WH worrying about zeroes. I think that's the previous topic, though. + +WH: Yeah, the specific issue I identified is that this spec contains conditionals like "if *x* is zero" or "if *x* is *y*" on floating-point numbers, and if *x* happens to be +0 and *y* happens to be -0, what does that mean? Technically it will be false, but you want it to be true. So I'll help with the details for that. + +JRL: Okay. So moving on, then, we have PFC. + +PFC: Hi. I want to support this proposal going forward, because I think it fills a gap in the language, one that I find myself reaching for with reasonable frequency.
And I would prefer that something goes forward, even if it's just Number integers or safe integers. If there's skepticism about fractions or BigInts, I'd really prefer that we at least move the core of the proposal forward during this meeting, which is Number integers. + +JWK: Thank you. So it looks like the queue is empty. And I wonder if anyone has a strong concern -- if anyone has a strong concern with any of the runtime semantics of this proposal, please raise an issue on GitHub, or maybe now. I hope there is none, so can we -- I want to ask for Stage 2.7, and I hope I can get some implementation feedback. + +KG: It sounds like MM was still concerned about the issue of non-integral floats. + +MM: I am, yes. I understand the form of evidence that you're presenting, but with that kind of experience-based evidence, I'd like to understand better what the underlying phenomenon is in order to know how to interpret that evidence. I'm not directly familiar with that evidence. I don't have that experience. + +KG: Sure, I mean, the evidence is just, like, the range operator exists in many languages, as far as I can tell, and in all of them that have a floating point type, it accepts floats and does basically exactly the thing that it does here. + +MM: That's -- so I accept that it's widespread and that we don't see a lot of complaints about it. But I don't -- I think that's very weak evidence that it's not a footgun, that people aren't stumbling into the trap and occasionally their programs go wrong unnecessarily when we could have saved them that pain. I don't think the evidence speaks to that question one way or the other. Or it provides weak -- sorry, it provides weak evidence that it's not a problem, but it doesn't provide strong evidence. + +KG: I agree that it does not provide strong evidence.
I don't feel like this is a place where we ought to be trying to save programmers from themselves. Like, normally I am in favor of that, but, like, this is just so clearly how it works that while it is certainly true that you can trip and fall over it, that's true for floating point everywhere. And, like, yes, sometimes when you are using floating point numbers, you will have something happen that you didn't expect to happen. But, like, I think that cost is just sunk, that cost is built in. We have committed to floating point numbers in our programs, and so if you trip over it, like, that's kind of sad, but it's fine. + +JRL: A quick point of order. Jack, you are still sharing your screen. + +JWK: Thank you. I know that. + +JRL: Okay. All right. + +KG: I'll stop rambling, sorry. + +MM: My bottom line is I'm not willing to go to 2.7 right now in this meeting without more discussion of this question. + +WH: In response to MM's point, if we are going to ban fractions, I'd like to see some evidence that that would be okay. So far I haven't seen any. So I'm worried about the fallout from trying to ban fractions and unsafe integers. + +MM: Okay. I accept that the evidence on all sides of this is weak and that -- + +WH: I wouldn't say it's weak. I think that the evidence on the side of fractions being fine is quite strong, given what has been presented so far from experience in other languages. + +MM: I would like to understand better how to interpret that evidence. Right now I take that to be weak evidence that it's not actually a problem. + +SYG: Also, NRO points out that I was mistaken: in fact Python does distinguish between integers and non-integers in a way which is relevant, and their range does not take non-integral numbers. I apologize for making too strong a claim.
+ +MM: In that case, that's evidence answering WH's question: some evidence that it's okay to ban non-integral numbers. + +KG: Okay. + +CDA: Just be mindful we have less than four minutes left. There's like five items in the queue. + +KG: For what it's worth, having just checked, Python also bans integral floats, which is a distinction that they make. You know, you can't have a range from 0 to, you know, 2.0. You have to leave off the point zero. + +SYG: Just for clarity, I am happy with BigInts. I was provided a use case by BSH on Matrix, and I withdraw my weak concern about BigInts. + +CDA: JHD is in the queue, +1 for Number with BigInts. DLM supports Stage 2.7. + +DE: Yeah, I'm a little concerned about how we're going to conclude on the back and forth here. We understand what the design space is. It's laid out in front of us. Can we commit to drawing this to a conclusion by next meeting, if we're not doing 2.7 this meeting? Because, you know, this is just between MM and WH. You're the only two people who expressed really strong opinions. You two have worked together for many years. Can you work this out? + +MM: I don't know -- I mean, I'm fine coming to the next meeting with a conclusion of our discussion. The conclusion might be that WH and I just -- that we conclude that we disagree, in which case we're stuck. But that is a conclusion. + +DE: Yeah, I'm not sure we should just accept that we're stuck and JavaScript programmers can't use this feature. I hope you can talk this out and come back with recommended next steps. + +WH: I disagree with the characterization that this is just between me and MM and that the two of us are the only ones with an opinion on this subject. I think that's unfair. + +DE: Okay, great. So maybe all the people who have opinions can get together and try to find a proposal by next meeting. + +CDA: Okay. MAH?
+ +MAH: Yeah, I mean, I was just wondering if there was a way to express the step in another way, so that the algorithm itself would be able to avoid the rounding that may be a problem. So, for example, if you want to increase by one-third, you would be able to express that as, like, multiply by one and divide by three. + +JWK: I think that might be overcomplicated. I've never seen a programming language with that kind of API. + +MAH: Yeah, just suggesting ways of potentially solving this dilemma here. + +JWK: Okay. Can we have Stage 2.7? + +MM: No, I'm sorry, we can't. + +JWK: Oh, okay. + +WH: I support Stage 2.7. + +JWK: So we can resolve MM's concern and bring this back next meeting. Thank you, everyone. + +MM: I should also make clear that this is my only concern. Aside from this issue, I'm completely happy with this proposal going forward. + +JWK: Okay, thank you. + +KG: And we haven't heard any other issues expressed except SYG's concern about BigInts, which has been resolved, and some editorial issues with the spec, which can be resolved by next meeting. So I just want to make sure that everyone is on board with every aspect of this proposal as currently written, except for the question of whether or not to accept non-integral Number arguments, and potentially the question of what to do with Numbers past the range of the max safe integer, since we didn't talk about that very much. And no other concerns, such that if -- if we come to a conclusion on the issue, then next meeting everyone is going to be like, yes, we support this as is, right? + +MM: Yes. + +KG: I want to give everyone a chance to object to that. + +CDA: We will consider KG's analysis there to be agreed upon, unless anybody speaks up in the next, I don't know, 10 seconds.
+ +DE: I want to suggest that the people who have thoughts here and are willing to engage between this meeting and the next identify themselves, so that people can, you know, get together and discuss it. + +MM: Is there any reason not to have the primary discussion just be on issue 64? + +DE: Yeah, if everyone who has opinions can commit to, like, engaging on issue 64 -- which, you know, presumably you could have come to a conclusion on before the meeting, but now is still a good time -- you know, that sounds like a good way to do it. + +MM: Okay, good. + +JWK: Thank you, everyone. + +### Speaker's Summary of Key Points + +- No changes for 1 year; ready for the next stage since iterator helpers have shipped in Chrome. + +### Conclusion + +- Waiting on the discussion about floating point numbers; will come back next meeting. + +## `Math.sumExact` for stage 2.7 + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/proposal-math-sum) +- [slides](https://docs.google.com/presentation/d/1QallvKcuIL2UHALEYnP4AdT8nX_iZ6QHDgpVISUXHMg) + +KG: I'd like to present `Math.sumExact` for Stage 2.7. The repository is up there if you need the link. And just as a reminder of what this is: my fundamental thesis is that there should be a built-in mechanism for summing a list of values, and if we are doing this, it should be more precise than just naive summation, which, as we are all hopefully aware, has the problems of accumulating floating point errors, being non-commutative, and various other issues. It is in fact practical to be maximally precise, which is to say, to specify that the answer is what you would get if you took the floating point numbers, converted them to real numbers, did the arithmetic on real numbers, and converted the result to a double-precision float. And so we can be as precise as it is possible to be. The API that I am proposing takes as its only argument an iterable of Numbers. And it's spelled `Math.sumExact`.
Each of these slides we will go over in more detail, so I'm not going to get into questions around that right now. So I think the most interesting and kind of scariest aspect of this is the question of specifying full precision. And this turns out to be a surprisingly, or I suppose not surprisingly, active area of research. When this was discussed either one or two meetings ago, WH pointed out that full precision summation was practical, and since then I implemented it in JavaScript and proved to my own satisfaction that it is indeed practical. And Python uses the algorithm that WH linked at the time. That algorithm is from Shewchuk '96, and there have been newer and shinier things since then. I have a link to one here called xsum by its author, building on work called Algorithm 908, if you want to look either of those up. The claim, which I haven't verified myself, is that for reasonably sized lists, which is to say at least ten or 100 elements, which is where the comparison starts being meaningful, it's on the order of two to four times slower than naive summation, at the cost of about 500 bytes of memory. For very large lists you can get even closer to naive summation, but you need 32K of memory, and I'm not assuming people are going to be optimizing for that case. For the straightforward case, where you use the small accumulator, you use 500 bytes of memory and get perhaps four times slower than naive summation; given the overhead of working in JavaScript in the first place and working with Numbers and so on, that seems to me to be an acceptable overhead. And the library I linked is open source -- I'm not going to name the license off the top of my head because I don't remember it, but it is open source -- and I suspect it could be used directly if you don't want to bother reimplementing it. So, yeah, the thesis here is just that full precision is not only practical, but also not that slow.
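To illustrate the point, here is a minimal example (not the proposal's algorithm) of why naive summation is insufficient, together with the classic "two-sum" error-free transformation that Shewchuk-style exact summation builds on:

```javascript
// Naive left-to-right addition loses information and is order-dependent:
// the correct real-number sum of these doubles is 1 in both cases.
console.log([1e16, 1, -1e16].reduce((a, b) => a + b, 0)); // 0
console.log([1e16, -1e16, 1].reduce((a, b) => a + b, 0)); // 1

// Two-sum (Knuth): for doubles a and b, returns [s, e] where s is the
// rounded sum fl(a + b) and a + b === s + e exactly. Exact-summation
// algorithms keep these error terms instead of discarding them.
function twoSum(a, b) {
  const s = a + b;
  const bVirtual = s - a;
  const e = (a - (s - bVirtual)) + (b - bVirtual);
  return [s, e];
}

const [s, e] = twoSum(1e16, 1);
console.log(s); // 10000000000000000 (the 1 was rounded away)
console.log(e); // 1 (the rounding error, recovered exactly)
```

Keeping a small collection of such error terms is what lets the algorithms KG mentions stay exact while using only ordinary double arithmetic.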
+ +KG: So, more questions about the design space. You kind of have to take an iterable. We talked about this a while ago. Taking var-args to match, for example, `Math.max` just doesn't work, because if you have a large array, it's going to blow the stack. The precise size at which you get a RangeError varies across engines, and that's super annoying. You just can't do var-args. So precedent from `Math.max` is insufficiently compelling, I think, and I think "sumExact" is a sufficiently different name that it is hopefully not that surprising. + +KG: Also, it does not do any coercion. Not going to go into this. If any of the inputs are not Numbers, then it's a TypeError. Also, I did say Numbers: this doesn't take BigInts. + +KG: On the question of naming: `sumExact` might confuse people in that they might expect this to be like decimal arithmetic, but it's not. I don't think there's anything to be done about that. We need some name that conveys that this is higher precision and therefore slower. I think `sumExact` is a fine name, except possibly you might like to suggest more clearly that it takes an iterable, and so `sumExactFrom`, for example; `From` is a suffix that we often use on methods to indicate that they are iterable-taking. We could spell this `sumExactFrom` if we wanted to. I leave that to the committee. Currently it's specified as `sumExact` and will default to that unless people would like to change it. + +KG: Then there's the question of what you do with an empty list. I think that minus zero is the right answer. We talked about this at our last meeting. Some people said plus zero would be nice just so that you don't have to think about minus zero as much, and I thought about this and I still think that's wrong. It's not fatal either way, in my opinion. But minus zero really is the floating point addition identity. And it's so rare that this causes problems.
People might be a little surprised, but then they can learn an interesting fact about floating point numbers, and their programs won't be buggy, because minus zero and zero are almost indistinguishable. So I think minus zero is the right answer here. + +KG: The last open question is, do you stop when you encounter a `NaN`, or drain the rest of the iterator? My inclination is to drain the rest of the iterator, just for consistency. It also preserves the property that this thing has the same behavior regardless of the order of your inputs, which is a really nice property. I don't want, like, `NaN` and then a string to give you a different result than a string and then `NaN`. It's really nice to be commutative. My inclination is to drain the entire iterator, and if at first you have a `NaN` and then a billion other things, you have to consume all billion of those things. I don't think it's worth optimizing for that case, especially at the cost of giving up commutativity. If you disagree, get on the queue. + +KG: And then this slide is just to mention that I'm not going to write down one of the various full precision algorithms. It's not practical, and this is an editorial question. We're going to specify it as doing the arithmetic in real space, and that's what I have done here. And this is the full spec text. It's relatively straightforward. The bulk of it is handling what to do if you get negative or positive infinity or `NaN` or zero and minus zero, and then it says to do the arithmetic in real space and points you to a couple of algorithms that allow you to do that without doing real number arithmetic. Okay, I expect there's a queue, but after that, I am hoping to ask for Stage 2.7. + +MM: Yeah. Two issues, one of which I put in the title. The term "exact" actually does bother me, in that it promises more than it is possible for it to deliver. And I'm always concerned about, you know, nasty surprises for the more naive programmers. What about `sumPrecise`?
Do you have a reaction to that? + +KG: Sounds fine. + +MM: Okay. I would certainly prefer it. I think it's more clear about what -- about what it's actually providing. The other thing is, when you say it's specified in terms of doing real arithmetic and then converting the result back to a float, does the algorithm in question actually deliver, you know, the closest floating point number to the correct real number? That seems impossible. + +KG: No, it's totally possible. I mean, there's a bunch of ways you can do it, and it just does it. + +MM: Really? Okay. + +KG: It's not that hard. Like, just by way of illustration, a way that you could do this -- in fact, I have implemented this -- is to take your numbers and convert them to, like, 2,000-bit integers, just the full range of doubles -- + +MM: Yeah, yeah. No, I understand that. But given that you're using a double precision floating point ALU in the algorithm, which I'm sure you do, I'm just surprised this is possible. If you assert that it's possible, then, you know, I believe it, and if so, that's great. That addresses something that would have been a big concern. + +KG: Yes, it's possible. The trick is generally that you keep a bunch of different floats around to handle different parts of the precision, and you do the arithmetic carefully in a way that allows you to handle not just the addition, but the error term in the addition. + +MM: Well, I'm very impressed. + +WH: Yes, to MM's point, I actually worked out the algorithms of how to do this. + +WH: There are two aspects to doing this. One is that you can represent exact mathematical numbers as sums of multiple doubles and preserve the invariant throughout the algorithm that your exact mathematical real number is equal to the sum of one or more doubles. And that works fine.
The other fun aspect we worked out is how to deal with intermediate overflows past the double range, and there are very good solutions to that too. So neither of these is an issue. The whole thing is efficient! I've reviewed this extensively. I certainly support this for Stage 2.7. + +CDA: JDD? + +JDD: Just matching the other people that are replying. + +CDA: Non-speaking, I'm sorry. +1 for 2.7. Plus 1 for "precise". + +MF: Yeah, support 2.7. I slightly prefer including "from" in the name, because there are a lot of APIs that use "from", which all take iterators or iterables, and I think it would be slightly helpful here because we had -- this may be just a problem that affects us because we've waffled about it -- but because we've had this discussion about whether to take var-args or an iterable, this helps you remember what it does. I know not everything that takes an iterable is named "from", but I think everything named "from" takes an iterable. So that implication would be there. So I slightly prefer that. + +KG: I would like to hear if anyone else has opinions on the question of "from", because I slightly prefer to omit it now that we have talked about it, but I'm fine just going with Michael's choice of naming if everyone else is neutral. + +MG: I'm like you. I would like to hear other people support it too. I don't want to be the only one that supports "from" here. + +CDA: NRO, we cannot hear you. I see you are unmuted, but we cannot hear you. We still cannot hear you, but you prefer without "from". We'll come back to you if you get your audio working. DRR? + +DRR: I guess I think that the "from" suffix is typically more useful when there's much more ambiguity. I get that there is some ambiguity in that you could take two args, or var-args, or whatever. I think here, if you just kind of say this is what the API looks like, you may find it just redundant to have the "from" suffix.
So we can use it in other places, but, you know, I have a very slight preference for not using it here. It's not a very strong one. + +CDA: EAO says prefer no "from". + +WH: I have a very weak preference for omitting "from". + +CDA: All right. Ron? Ron, we cannot hear you. I will keep going in the queue, but we cannot hear you, Ron, so I'll go to the next. Please rejoin the queue. + +CDA: JHD says omit "from": the others that use it are producing a new type, and this isn't. NRO is back. NRO, I cannot hear you. + +KG: Given that we only have a few minutes left in the timebox, I'm just going to take -- the conclusion is we don't do "from" -- and get to the rest of the queue. + +JDA: What do we have -- we have 15 minutes left in the timebox. + +KG: Oh, for some reason I thought this ended at the top of the hour. In any case -- + +CDA: We didn't start on time. + +KG: I think we can conclude on "from". I'd like to get to the rest of the queue. + +CDA: JDD, plus one for omitting "from". Daniel Minor -- Dan Minor. + +DLM: Sure. I just wanted to say the SpiderMonkey team continues to be in favor of this proposal, and although I never thought I'd participate in a naming discussion, I do prefer `sumPrecise` to `sumExact`. Thank you. + +MF: On this one I have a slightly stronger preference than the preference I had for "from". Still support 2.7. But I prefer the early exit for NaNs. I know that there's this inconsistency with whether something would throw or not when we've reached a NaN either before it or after it. And I don't really care about that. I would -- you know, once we see our first NaN, we know that the operation will only ever throw or produce NaN, so if I have a very long iterable that yields a lot of values and the first value it yields is NaN, I'd rather just not do that additional work just to see if it can throw. Especially if I know things never would throw. So, yeah, I have a stronger preference about early exiting on NaN here.
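For concreteness, a toy model (hypothetical `sumDrainingToy`, using naive accumulation rather than the full-precision algorithm) of the "drain the iterator" choice KG described, which MF's early-exit alternative would change:

```javascript
// Toy model of "drain the iterator": every element is type-checked even
// after a NaN has been seen, so a non-number anywhere always throws,
// regardless of its position relative to the NaN.
function sumDrainingToy(iterable) {
  let sum = -0; // -0 is the floating point additive identity (the empty-sum result)
  for (const v of iterable) {
    if (typeof v !== "number") throw new TypeError("expected a number");
    sum += v; // NaN propagates, but we keep draining and type-checking
  }
  return sum;
}

// Order-independent: [NaN, "x"] and ["x", NaN] both throw TypeError.
// An early-exit-on-NaN version would instead return NaN for the first
// ordering and throw for the second.
```

The same order-independence question applies when opposite-signed infinities put the sum into the NaN state, as WH raises next.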
+ +KG: I'd also like to solicit opinions from the committee on that question. + +WH: Just a clarifying question: does your early-exit preference include the case where there is no NaN in the input, but the input contains a positive infinity somewhere and a negative infinity somewhere else? + +MF: How do you know there's no NaN? + +WH: Okay, the situation is this: you're scanning down the iterable. You've already seen a positive infinity; now you see a negative infinity. Do you do the early exit right away, or do you continue? + +MF: When you see a positive infinity and then you see a negative infinity, remind me, what is the sum up to that point? + +WH: At that point the sum is going to be NaN, regardless of what else you see. + +MF: Okay, we've phrased this -- KG has not phrased this the way I phrased it in the issue. The way I phrased it in the issue, when we have reached the NaN state, the sum up to that point becomes NaN. We cannot leave the NaN state again. I say any time we enter that NaN state, we just exit. + +WH: Okay, I understand your position. What I suggest is we do a poll to see how people feel about this. + +KG: I'd be okay with that. + +??: Yeah, although I would like to hear -- I think SYG at least has something to say. + +SYG: Yeah, I found the commutativity thing compelling, because this is going -- this is an API you opt into for precision, explicitly over speed. So the -- so the -- yeah, okay, now that I'm thinking through this, KG, can you motivate the commutativity argument a bit more, with a practical example where lack of commutativity -- lack of observable commutativity -- would bite you somehow, make the code more complicated somehow? + +KG: Yeah, so the -- the reason commutativity is nice in general is that it allows you to split up the work and then not worry about how you're splitting it up.
You can imagine you're adding items to a queue and summing the result at the end, and if the items are produced by workers that are async, the results come in async, and being commutative in general is a useful property because it means you don't have to care about the order that the results came in. Now if all of your workers are producing numbers, including NaN, then this issue isn't an issue. But if one of your workers messes up somehow and produces a non-integer [sic] value, and a separate worker produces NaN, let's say, then at the end, when you sum up the results, you will get either an error or NaN, depending on the exact order in which the workers added their items to the queue. And that seems undesirable to me, especially because in all other cases, it doesn't matter. You get precisely the same answer. + +WH: KG, you said there's an issue when somebody "produces a non-integer value". I think you meant "non-number value"? + +KG: I did. I did mean "non-number". Sorry. + +KM: Yeah, I think even if you don't exit early for NaNs, there's an optimization engines could write so that in the common case it wouldn't affect performance. In at least JavaScriptCore, and I believe all of the engines, there's a special array backing mode for floating point numbers. And most likely, if you are calling this function, you are passing an array of floating point numbers. If you hit NaN there, you could exit early internally, because there are no TypeError-type things left to hit. So I think it would be fine, from my perspective, to do either semantic choice, and it wouldn't make a difference to most people's performance. + +KG: That's a very good point. The early exit is only an optimization if you are not working with a double-backed array. + +CDA: PFC? + +KG: Sorry. It looks like PFC's question is on naming and I want to try to settle this question of handling of NaN. Could we get a quick temperature check?
I would like to open the – the question should be: do you favor early exit for a NaN? If you are supporting this, then you are giving up commutativity in the – + +CDA: I am going to restart this. There was prepopulated information in the temperature check for some reason. Okay. + +KG: Question is, do you favor an early exit for NaN? + +KG: And we will give people just a minute or so to vote. + +SYG: Can I say a thing that is material to this while people vote? + +KG: Yes. + +SYG: There's `Math.hypot`. It takes varargs, but the behavior is that it coerces all the args first, and then does an early exit on `Infinity`s and `NaN`s inside the computation loop. So that precedent more closely matches draining the iterator, even if the iterator contains NaN or Infinity. + +KG: Okay. It looks like people are done voting. EAO doesn't see the poll, prefers no early error. EAO, can you speak? Sorry, I don't know which early error you mean. + +EAO: I meant not erroring and draining the queue. + +KG: NaN isn't an error. It's just stopping. It's an early exit, not an early error. + +EAO: Sorry. That's what I meant. Yes. + +KG: Okay. All right. Well, it looks like people are no longer voting: eight people unconvinced of the change, five in favor. As champion, since I need to make a decision, I am going to stick with the current semantics of draining the entire iterator so we can be done with it. MF, unless you really object, or anyone else really objects, to going forward with this proposal with the draining-iterator behavior, I will consider this question closed and move on. + +MF: No. I do not object to it. + +KG: Great. And then PFC was next in the queue. + +CDA: A couple minutes left. + +PFC: My assumption is that most developers don't care about floating point arithmetic and don't understand it very well.
And I don't think that's something that, when it comes to the naming, we can necessarily just sort of shrug at and say, 'what can you do?' Because if the name is `sumExact`, then I am pretty sure that people will say, 'well, if it's exact, why didn't they solve the 0.1 + 0.2 = 0.3 problem?' I like MM's suggestion of `sumPrecise` better than `sumExact`. I can't exactly explain why that feels different to me. But I would like to suggest that you consider other names that don't imply, to developers who don't care about floating point, that this is somehow not floating point. Maybe `fsum`, like Python – I don't have a good suggestion. But I think `sumExact` is going to cause confusion. + +KG: Yeah. So SYG actually suggested `sumSlow`. I spent a while thinking about the names. And the conclusion that I came to is that I really want the name to suggest not just performance, but something about the behavior. If we are not doing `Math.sum`, and there are good arguments not to, I want a name that suggests that this isn't just summation, and tells you something about how it sums. "fsum" is just like, this is a sum but different somehow. I want it to be informative, and if it is about the precision, we run into this problem: we are going to necessarily be conveying that this is more precise than summation. So I don't see a way to avoid this problem, at least not with a name I considered acceptable. `sumPrecise` is totally fine by me. + +PFC: It's a fundamental mismatch of viewpoints and context. The way we're considering it in this room is 'more precise than naive summation', and the way I think most developers in the world consider it is, 'less precise than I expect, because I don't care about floating point.' + +CDA: We are at time. + +KG: I would like to ask for Stage 2.7 with the name `sumPrecise`, with otherwise the semantics that I have presented. I don't think I am going to come up with a better name.
We have heard a number of comments of explicit support. + +MF: Is it okay if you repeat each of the decision points that you presented, and which side of each decision you are on? + +KG: Sticking with full precision. We are still taking iterables. Not coercing anything. The name will be `sumPrecise` rather than `sumExact`. No "from" suffix. Empty list will be negative 0. We will not early exit on NaN but drain the entire iterator. And we will have the spec text as written. + +CDA: In the queue we have +1 from JDD, +1 from HAX, +1 from MM, EAO, DE, MF with a fractional repeating support, and +1 from WH. Sounds like you have 2.7. + +### Conclusion + +- Stage 2.7 under the name "sumPrecise" +- other items remain as presented: takes an iterable, does not coerce, gives -0 on empty list, drains the whole iterator even in the case of an early `NaN`, spec text given using real-number arithmetic + +## Discussing new directions for R&T + +Presenter: Ashley Claymore (ACE) + +- [proposal](https://github.com/tc39/proposal-record-tuple) +- [slides](https://docs.google.com/presentation/d/1JfChmW8tQ2_mrFDynosNqa1tjJ2j-qX6WoKm8vc_tkY/) +- [related repo](https://github.com/acutmore/proposal-keyby) + +ACE: All right. Records & Tuples is a proposal that has been going since Stage 1 in 2019. We spent a lot of time getting into a lot of details, and I last presented at the end of 2022. And that was kind of a last-ditch attempt at trying to see if we could proceed with the semantics that we had for the proposal. And we got some feedback on the fundamental parts of the design – feedback on core parts of the proposal, which was hard to hear at the time, because we had been going deep into the niche parts of the proposal, using plenary time to talk about things like `Symbol.toStringTag`, when there were really fundamental things that we should have been focusing our attention on, and we got sidetracked by more niche things.
+ +ACE: But what I want to do today is just kind of step back to the beginning: is this still a problem we want to solve as a committee? I assume yes. Do we agree on the core things, before I dive back into the proposal and put energy back into the small details? That's the intention of today. + +ACE: So, some of the pushback we got – you know, it was mentioned at the beginning of the proposal – one part was the fact that records & tuples were primitives, and really this was about implementation complexity: adding primitives to the language is on the difficult, hard side. It wasn't just about that, but also the ergonomics of using these things as a developer. And that's because these things created a very hard split. Even though they looked like object literals, they were different: they weren't objects compared by reference, they were primitives. These things that looked like object literals didn't extend from the object prototype, records couldn't have symbol keys, tuples were not arrays. They were all things that, when you started to look at them, you realized were very different – not objects and arrays. + +ACE: The other thing was closely related to primitives, but separable: the `===` equality. And again, this is more to do with the implementation side – this is complex – but also the performance developers would get from this. There are two kinds of strategies that could be used to implement it. One is to defer all the work until an equality operation is performed. That's kind of how the spec would word it: we go to SameValue in the spec and say when these things are equal. Whereas in reality, when you implement that, there isn't one line of code in the engine that does equality – maybe there is in some – but generally equality isn't defined in one place; it's in lots of places, and in hot paths.
For some engines, this could make things slower for all comparisons, because whenever comparing objects, they would need to load the object – an indirect load of that address – to work out what type of operation they are going to do, whereas currently the engine can skip that. And even though we told developers to expect linear-time comparison of these things, they kept saying back to us: "sure, you're saying that, but really, it's O(1), right? This is optimized." "No, no. This is probably always going to be a linear-time operation." So it does seem like for `===` developers have an expectation of a lot of performance, and the engines weren't comfortable with that expectation. The other strategy is to do this at creation time. When you create these things, look them up in a global interning table; if a match exists, use the existing one. The downside here is that the other benefit of records & tuples is that they are immutable, and a convenient way to get immutability. People might use them purely for those semantics, but they'd be paying the cost of the lookup without utilizing the equality semantics. Or, as we know, the hypothesis is that most objects die young: maybe they are just creating these things, doing a one-time lookup in a map, and dropping them – so putting them in a global table and then immediately dropping them. The other issue is negative zero, which just kept coming up a lot. If you are interning to make `===` work, either `-0` is not equal to `0`, or they are equal but you have to normalize, replacing the `-0` with `0`. + +ACE: So that was the pushback. Even if you take those things out of the proposal, as it were, there's still a lot of value there, and potentially things we could design on in the committee – adding value and solving a problem. So one is that these things add a very ergonomic way to create immutable data structures.
And I think that is still of value. Perhaps the previous proposal, which enforced deep immutability, was going too far, because it can limit what you can put in the data structures, and there may be enough value even if it's only adding a shallow immutability, like an `Object.freeze`. And if you are adding these immutable things, it's a perfect opportunity to also introduce equality semantics, because immutable things with equality are so useful: if things are immutable, the equality can't change – we will talk about that later. + +ACE: This is kind of not the exact case, but when you asked people how they wanted to use this proposal, or what they were excited about, one of the use cases that kept coming up was this composite Set key. They just kind of worked perfectly there: I can create a Set, put these things in it, test them, loop over them, and it does what you would expect from other languages that have this as a feature. So if we try to build that example using some other proposals in this space, also from 2019 – one of them was collection normalization from Bradley – you could construct Sets or Maps that apply a normalization function whenever data comes in. So perhaps, with the edges example, we can take the array of edges and just join them to create a string, with a dash in the middle; then I can add a value in and ask if the value is there and get true. The downside of this normalization is that I now have a set of strings. When I iterate over it, I am iterating over strings. I've dropped down a layer of abstraction. I have lost data that was there. + +ACE: So I do like the normalization proposal. A variation I prefer is something like MF's iterator proposal, where you provide a function which tells us how to normalize the thing only to determine whether it's unique – it doesn't change the resulting value. In the iterator `uniqueBy` proposal, I am filtering rather than mapping.
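The join-based normalization described above can be sketched in plain JavaScript. This is a hedged illustration: the collection-normalization hook is only a proposal, so the sketch normalizes by hand before inserting, and the `toKey` helper is invented for this example.

```javascript
// Normalize an edge (a pair of node names) to a string key by joining.
const toKey = (edge) => edge.join("-");

const edges = new Set();
edges.add(toKey(["a", "b"]));
edges.has(toKey(["a", "b"])); // true, even for a freshly created array

// Downside 1: iterating yields the flattened strings, not the
// original arrays – a layer of abstraction (and data) is lost.
console.log([...edges]); // ["a-b"]

// Downside 2: join-based keys can collide for distinct inputs.
console.log(toKey(["a-b", "c"]) === toKey(["a", "b-c"])); // true: both are "a-b-c"
```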
+ +ACE: So instead, you could have a `Set` where you give a `uniqueBy` function. Now the `add` and `has` work as expected, and I can iterate over the `Set` and still have the original unflattened value. + +ACE: So this is good up until a point: we are still limited in how we determine if things are unique, in that we have to produce things that are SameValueZero-equal. So perhaps I flatten these things into a string, using a dash to join and making sure that I am handling edge cases here, but there are still edge cases where I can get results I wouldn't want to be equal. The thing that I tend to see code doing today when trying to normalize is stringifying things. It works and is very easy to reach for, and it works for a lot of cases. But as we all know, `JSON.stringify` is not perfect, because JSON can't represent everything, and there are issues with key ordering and cycles. Also, it's not the most efficient thing to be doing: if you have a large structure, taking all the data in the structure and all the strings, concatenating them, and getting them into one massive string isn't the most efficient thing to be doing. So the proposals back in 2019 were aware of this, and there was a kind of API for creating a composite value in a more structured, value-preserving way. So here I can create a key with objects and BigInts and strings, and if I created the same key, which is like a vector, then they would be triple-equal to each other. + +So this would solve that problem. I could now put my edge identifiers in, and I am going to get that composite key. But now there's a new error if the keys only have primitive values in them – no lifetime-bearing values. And the reason is that the way these things are implemented is with a trie: you have a trie of maps, where for the objects you use a WeakMap and for the primitives you use a regular Map.
And this allows the key to not just leak forever: if one of the objects gets collected, that allows its map entry to be collected, and then all maps downstream get collected, and that lets us clean up this data structure. There is an alternative design of this where you do the same thing, but you also don't hold on to the key strongly. Because one issue with the previous design is, say one of the objects is a long-lived object – maybe it's from some module that will live for the lifetime of the application – it might never be collected. Yes, we have given the key a lifetime, but that lifetime is effectively the whole application from its perspective. + +So a variation – an optimization of this, I guess – is that you hold on to the key weakly, and then you can use a FinalizationRegistry to work back up through the maps and clean up from that direction. + +This is not effective either, because finalization callbacks don't necessarily come as frequently as you'd like. You have to yield, and you might have to wait for a full [inaudible] collection. + +And that means that the key is being held weakly by the table. If the users of the keys also hold them weakly – if they are trying to put them in a WeakMap or a WeakRef – then now no one is holding on to them strongly, and they can drop. And so, really, that doesn't have the kind of consistent semantics that people are expecting of the keys: because the keys can be re-created in the future, WeakMaps and WeakRefs should be holding them. You almost need to do the opposite – intercept the keys going into a weak collection and hold on to them strongly. + +But – so when it comes to the keys, you need the object part of them, and when someone is going to use them, they are not sure which object they should use; there's nothing object-like about the values they are creating.
So maybe they decide to use the Set that they are contained in, thinking the keys need some object and the Set is at hand – and this now kind of works, except if the composite key implementation is the first design, without the FinalizationRegistry, then the keys' lifetime is tied to the lifetime of this Set, and now you're effectively leaking tons of memory. + +ACE: All these things can be built in the language in user land today, but the keys are annoying in that you have to put an object in there. They are not optimized: if the keys are short-lived, you accumulate a lot of global data, at a potentially big cost. And I feel like it's easy to create memory leaks this way – and memory leaks are perhaps one of the hardest things to debug in JavaScript applications. These things are happening on someone else's website; you can't phone them up and tell them "give me a heap snapshot". So anything that avoids needing heap snapshots is always my preferred design. + +ACE: I am wondering if this perhaps disadvantages this approach. The appeal was triple-equals/SameValue semantics – what the language already has; JavaScript already has four different equality semantics, so it's nice to use one of them rather than add another. But I don't think it can be done well in user land. For composite keys – maybe the engines can correct me – I feel they could do a better job than user land, merging some of the maps together, because they have more access to the hash values; but I think they would hit some of the garbage collection complexity even with that lower-level access. + +ACE: So I am wondering how people feel about a different approach, where we have records & tuples, but this time they are objects – and maybe tuples can be arrays this time. They are just unique objects when created, and there is an operation to ask if they are equal. I imagine something like this: two empty ones (`#{}`) are equal; `0` and `-0` could be equal; key order doesn't matter; and they can be deeply compared, with tuples inside records and vice versa.
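The equality ACE describes could look roughly like this in user land. This is a hedged sketch: the eventual API shape (syntax, method, or operator) is undecided, so this is a plain function with an invented name; it ignores prototypes, and uses SameValueZero-style comparison at the leaves so that `0` equals `-0` and key insertion order is irrelevant.

```javascript
// Sketch of deep, order-insensitive structural equality over
// (assumed-immutable) objects. Name and shape are hypothetical.
function structuralEquals(a, b) {
  if (typeof a !== "object" || a === null) {
    // SameValueZero at the leaves: NaN equals NaN, and 0 === -0.
    return a === b || (Number.isNaN(a) && Number.isNaN(b));
  }
  if (typeof b !== "object" || b === null) return false;
  const aKeys = Reflect.ownKeys(a); // includes symbol keys
  const bKeys = Reflect.ownKeys(b);
  if (aKeys.length !== bKeys.length) return false;
  // Only the key/value pairs matter, not their insertion order.
  return aKeys.every(
    (k) => Object.hasOwn(b, k) && structuralEquals(a[k], b[k])
  );
}

structuralEquals({ x: 0 }, { x: -0 });                 // true
structuralEquals({ a: 1, b: [2] }, { b: [2], a: 1 });  // true: order-insensitive and deep
```

A real design would also need to settle the questions raised above, such as matching prototypes and falling back to referential equality for values that can't be compared structurally.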
+ +ACE: And essentially, we could still allow other things object literals allow, like custom prototypes – as long as the custom prototype is the same, maybe that's fine; that's getting into the weeds, but imagine it. Now, there was a reason tuples were primitives with enforced equality before: it was to avoid this kind of thing. But that had downsides in terms of the limitations on what you could support. So maybe – and this is a lot like how other languages work – if you use a value that can't be compared structurally, then you drop back to referential equality. So this would be false. + +ACE: So this has advantages. These values are objects and arrays; you can use them in your application. You don't need to drop over to a separate composite key and create one when you need it. You can use these things and they just happen to also have this equality semantics. You don't have any negative zero issue: you can have negative zero, we don't normalize it, but it is equal. You don't have to worry about key order. They're not primitives, so we don't have to worry about `===`. You can allow symbol keys. Equality is defined by the values; there isn't a user-hookable thing, so the equality never changes, and you don't have to worry about running user code. And they have the usual mathematical properties that you would want, like reflexivity. So if these are true, I think it would be really cool if these worked. It might seem like: no, no, this is all wrong – Sets use SameValueZero, objects are compared by reference – and that is shocking to us. But JavaScript will theoretically be around for a long time, and there's more JavaScript that doesn't exist yet than exists. I think people coming new to this language, or from other languages – even existing developers, myself included – after coming out of the initial shock of changing their mental model of the language, will find this works really nicely.
+ +ACE: We can just use the existing `Set`, but now these new values – objects, not primitives – have a much more practical and usable equality semantics. So perhaps that's too much and we don't want to change that part of the language, and instead we introduce a new global constructor, maybe a new type of Set. I can see the point in doing this. I don't love it, but I can see the point. Or maybe there's a different way of going about it. Personally, I didn't want an opt-in, but I can imagine it; my guess is an opt-in approach would be much preferred. + +ACE: I can also imagine that, even if you didn't build these into the existing collections, maybe there's still a place where they just work out of the box, where you don't have to say anything different: if you had the `uniqueBy` hook, they would work and have the semantics you expect, and the records & tuples would be unique. One thing about this is, it would seem like we should find all the places that currently use SameValueZero and replace them with this new thing, and I think that may not work. Because that would suggest we also change `Array.prototype.includes` to use this – and maybe we do, to be consistent with all the SameValueZero things – but then that pulls apart `indexOf` and `includes`. They are already a little bit different from each other, but this would make them more different. And I don't think we have the appetite for changing `Array.prototype.indexOf` to make it also use these things. So Array should stay as it is, would be my gut feeling. + +ACE: So I guess what I am asking the committee is, how do you feel about this? Is this silly? Too extreme? Not enough? I would love to hear. People have been attending the monthly Record & Tuple call and chatting on Matrix, and I would love to hear from those people, and also from other people on the subject – I have been thinking about it for a few years and I kind of have no idea how we feel about it.
I've been dreaming about potential possibilities. + +ACE: And the elephant in the room here is "hash and equals", because surely this is what every other language does. And I think that's completely fair and correct – I think hash/equals is great. But hash/equals is what happens under the hood, an optimal way of doing this; whether to expose it is a separate question. Either way, it's about doing something like a new form of equality – it's not triple equals. This is, I guess, a sub-part of the question. Maybe the new R&T things do actually use a hash code, and we build things in so that when you define them you get the kind of hash and equals symbol protocols – but then that still suggests you need a new constructor and a new Set, a new world where those things work. If it's possible to get these things working in the default `Set` and the iterator methods, I'm much more excited by that. But I am also interested in the hash/equals world. I can see the value in exposing those APIs, but I think they're slightly more niche – the performance cases where you really need that lower-level access. And I have a feeling that whatever direction we go in, the main thing is that most people are not initially going to want to write hash and equals themselves; they want to work at a higher abstraction layer. So I would be curious what people think. There's stuff in the queue. + +KG: Yeah. I put this on the queue when I didn't understand what was being proposed – that was halfway through, and I understand better now. There's one corner I am not really clear on. So you put one of these objects in a Set. And then you make another object that is distinct from the first one and you ask, does the Set have this? The answer is yes. And then you mutate your original one. And then you look up the second one again. You ask, does the Set have this? And the answer is no. Is that correct? + +ACE: How do you mutate them? They are immutable. + +KG: Okay. Yeah.
Immutable, so that's not a problem. Okay. + +ACE: It is a big part of this proposal – if these things were not immutable, none of this makes sense. The fact that they are immutable is why I think they have good equality semantics. + +KG: Okay. I think I understand. I have opinions, but I will get on the queue to share those. + +WH: Could these things support order comparison or less-than semantics? + +ACE: Yes is the short answer. In the previous primitive proposal we left it so that `valueOf` on tuples would throw, to leave that space open. There are certain use cases where people want to sort an array by two orders of sorting: sort first by this property and then by that property. This wouldn't be the most optimized way of expressing that, but it would be a very clean way of expressing it. So I think yes, they could, but previously we decided not to open that can of worms, to keep the proposal smaller. + +WH: Okay. + +JHD: As I have expressed in the Record & Tuple meetings, I don't think the current/previous version of the proposal carries its weight at all without `===`, which implies it should be a primitive. So I think it is a good thing you're looking for other ways to solve some of the problems, because that approach, I don't think, has ended up being viable, because of the implementer concerns that prevent new primitives or overloading triple-equals semantics. I like the idea of hooking into collections. There's an existing proposal that tries to do this, and I think it sidesteps a lot of the issues that we tend to have with customization and subclassing: when you make an instance of something, you provide a few hooks to make it behave differently, and then everything else just works. So I really like that direction. + +ACE: Yeah. I do like the idea of hooks.
The thing with hooks as the only way to do this – they slightly complicate things when you use the new `Set` methods. Like, what happens if I intersect a Set with another Set that has different hooks? I guess the preference would be the left-hand side, the receiver, so we can define what should happen. But it might be surprising to people, who may not be taking those things into account when they are using these methods. So I don't think the hooks are a perfect answer. + +JHD: Yeah. You're completely right. For the new Set methods that take an argument and operate on both it and the receiver, we would have to come up with an answer – so not a silver bullet, but almost a solution for everything else I've heard of. So I think that's a solvable problem if we go in that direction. + +ACE: Thanks. + +ACE: And RBN – I do feel bad, as I know you wanted to talk about hash/equals and I have forced your hand, and I apologize. + +RBN: Hash/equals is something I have been thinking about for a number of years. I have brought it up in multiple conversations and discussions in the past. I think the only reason I haven't presented it as a proposal is that my current workload related to TC39 proposals is over limit. However, a lot of things we've been talking about, on Matrix especially, have been around the different equals+hash type approaches versus something like composite key. And there are specific things that composite key enables, but a number of things that it doesn't – it's not the most efficient mechanism for handling things like case-invariant collections, or trying to compose things that require – sorry.
Like building a custom hash table – I have recently had the need to do such a thing, especially working with experiments with the structs proposal. Given the lack of a concurrent hash table or concurrent map as part of that proposal – and likely not part of it in the future – sometimes it's necessary to actually be able to build a hash table yourself to do the calculations, and there are libraries and packages out in the ecosystem that do that. A lot aren't as efficient in the browser, because they require using a WeakMap to assign some identity to an object to use as a pseudo-hash. But having the ability to generate a hash and do comparisons is much more efficient than something like a composite key right off the bat. Composite keys require allocations: every time you do a get or set, you wrap a [value?] just to represent the key. Whereas if you had equals/hash, you can do that in one place; you don't have the overhead for every single `get()` or `set()`. So there's a lot more efficiency to be found with something like equals/hash, and it's consistent with other languages. And even the native implementations of JavaScript – in most cases, at least every one that I've looked at so far – use equals/hash natively. So they're already doing these things, and there's value in giving that power to users, because it's not a terribly complicated feature to leverage. + +ACE: Yeah. As I said, I 100% view hash and equals as useful. Effectively, hash/equals is still there, but it's below the line: when you create these things, the engine internally knows the hash code and compares them to decide if they are equal. But, as I am presenting it, I am not making that something where the user can say "this is the hash code for this value". I can agree, though.
I can see – especially with the shared structs proposal – that those things are mutable, so they wouldn't have the built-in hash/equals, and that's where perhaps you want to drop down a level. What I am saying is, I think wanting to write hash/equals yourself is the less common case. The more common case – a lot of languages have these things because it's traditional to have them, but they also tend to provide data types where you get them for free: you can write a data or record class and it automatically implements hashCode and equals, so I don't have to write them myself and remember to update them as I add new fields to the object. + +RBN: Yeah. I do agree that most code doesn't need to write this itself. In places like the .NET Framework, most comparison and equality objects that you would use – a comparator, IComparator or the like – are already implemented; you just use, say, a case-insensitive string comparator. But you have the level of flexibility you need to get in and write the hashing algorithm – write how you want to take various properties on an object, hash them, and bit-shift them together to create a valid… and that level of flexibility is something valuable in those cases. + +ACE: Yeah. I agree. I want to get through the queue. Thanks, RBN. + +WH: RBN, I'm trying to understand your position. Are you suggesting that ECMAScript implementations expose hash values for all objects? + +RBN: The proposal I have considered but not presented would be that the ECMAScript language have something like an `Object.hash` static method: you pass anything in, you get a hash code back – an integer value; whatever it is, it produces a 32-bit integer. For objects, you could use an identity hash that doesn't correlate to the actual physical memory address, just to guarantee there is a unique identity for each one.
For something like strings, it's possible that you could implement or use a specific string hashing algorithm like xxHash32 or xxHash64. The built-in hash code function is useful for getting hash codes for native values. If you are building a structured object – say I am building a point class, or even a composite key class – I might have a comparer that says: if this value is a composite key, I can get the `.hashCode` of the first and second elements, and I know how to do the bit math and the bit shifts to get good avalanche properties, and that's the value that I return. So I control what gets returned and how equality is compared. It's a combination of APIs provided by the runtime and the ability to hook equals and hash on something like Map and Set.
+
+WH: Are those hashcodes portable across implementations or implementation-specific?
+
+RBN: Not portable. They are not designed to be serialized. Even in languages like the .NET Framework, they use randomized comparers that are unique to the specific thread, so you're guaranteed to get the same hashcode within that thread. Maybe not just the same thread – maybe the same AppDomain, so all threads within the application. But it will not produce the same values every time you start the application. And it's not designed to be serializable or guaranteed from one app restart to the next.
+
+WH: Okay. Thank you.
+
+CDA: Noting, we have limited time left and a fairly long queue. SYG is next.
+
+SYG: This is a narrower thing about using the composite objects inside WeakMaps. I am not sure I caught the position that you favor. I heard something like: when they are put into WeakMaps, they would need to counterintuitively be held strongly instead of weakly. Isn't this exactly analogous to why we have the canBeHeldWeakly predicate for things? Is your argument that we can allow these things in WeakMaps because they are not triple-equals, but they have this separate notion of equality?
+
+ACE: I guess the main thing I am trying to solve for is GC complexity. That's the feedback we got for R&T.
+
+SYG: Okay.
+
+ACE: But then, to answer your question, there are lots of different ways we can define how these work in WeakMaps. When we talked about this with tuples, it would be that the key itself doesn't have a lifetime of its own – its lifetime is the shortest lifetime of the things inside it. When one of the objects in it (ignoring symbols; it's easier if we just say objects) is no longer reachable, the key can't be forged again, and the WeakMap would then be free to drop it. Or, alternatively, we just say these things are objects – they behave no differently from any other object. That works when keys are compared with triple-equals, but in the composite key case you don't want that, because these things can be recreated. That's the thing I am trying to –
+
+DE: We might want to ask: what should be in a basic version of records & tuples? In the basic version, records & tuples cannot be WeakMap or WeakSet keys; for simplicity it would be without this. Is that right?
+
+ACE: That is my preferred option. JHD is against allowing any objects – you can't put objects in these things at all. But I think the best option would be not to allow them to begin with. I do think using weak collections is an advanced use case, so changing their semantics doesn't carry the same risk; changing the String constructor would scare every developer, whereas using weak collections is more advanced.
We should be more comfortable with making changes that more experienced developers can understand – that is my personal preference.
+
+MAH: I will get to the changing of semantics later, if we get to it. But I have use cases that would really benefit from being able to use composite WeakMap keys. So I would really like that to come out of this proposal.
+
+ACE: Okay. I like that you have use cases for it. But the thing about it is, it doesn't need engine support – it's only an optimization. We have both created things where you can implement this in userland: you read out the values and put them in a trie of WeakMaps.
+
+MAH: Tries are just a pain to implement. Sure.
+
+ACE: The use case for the engine doing it is that it's more efficient, not that it's a new capability.
+
+EAO: We at Mozilla generally like this direction for the proposal. But we do have the sense that what is being proposed or talked about here is not quite the solution to be accepted for Stage 2. We note that this sounds like a really, really good Stage 1 conversation and development.
+
+ACE: Yeah. I feel really great about that. I wouldn't just assume this could pick up from Stage 2. But… I completely agree.
+
+CDA: We are technically at the end of plenary for the day. However, this is a discussion item, so I can stay on for a bit, if folks would like to continue going through the queue.
+
+KG: Sure. I just wanted to express that the "composite key" that you presented – I prefer that over changing the behavior of collections or having a new kind of collection. It just plays nicer with other things like `Array.prototype.includes`, or just comparing it against a bunch of values, or whatever. It's not an extremely strong preference, but a mild preference. And separately, if these are producing objects – and they are, no matter what we do here – I would kind of prefer not to use any syntax for it. It doesn't seem necessary, and it's a lot of syntactic space to use for a somewhat more obscure thing.
+
+ACE: Thanks, KG.
+
+DMM: This feels like it's a lot of the side of the records [inaudible] and more widely value types have in all the [inaudible]. If we do change semantics, I think one of the important things is to ensure that things like `Set.prototype.has` and `Array.prototype.includes` behave the same way, because that is what users already [inaudible] change. If we start changing the semantics of collections, you have to change the array methods to match. I very much agree with RBN on equals and hash being a good way forward for this. And I think we can preserve the semantics of existing objects, with it only affecting the new record-type things, which automatically declare equality and a hash, numbers, and that sort of thing. But that is a useful way for the language to resolve this and open up a lot of [inaudible].
+
+ACE: Okay. As I said, I don't like the idea of `includes()` and `has()` not matching. But the reasoning was: if they match, `includes()` now becomes even more unlike `indexOf()` – which I think a lot of JavaScript developers aren't aware of today; it surprises them when you show them that `indexOf()` and `includes()` can give different results. We could avoid this problem, at the cost of ergonomics, by not changing the existing `Map` and `Set` and instead introducing a new map and set to people. So I think either way developers have more things to learn: either they learn that existing things behave differently, or they learn there's a whole new collection type they need to use if they want these semantics. And if these are the same value, then you have all the garbage collection issues.
+
+ACE: MAH?
+
+MAH: Yeah. I really like the idea of basically using record & tuple as structured composite keys. I think that makes them directly usable.
My concern with the direction right now is more about the suggestion to change the semantics of existing collections without explicit opt-in. I think it might have unintended consequences, and I would much prefer an options bag or a separate constructor – I think an option in the constructor might be sufficient. And if you do that, at that point there is less of a problem with `includes()` not matching, because you are explicitly opting into your `Set` having different semantics.
+
+CDA: Thanks, MAH. That's it for the queue.
+
+ACE: Thanks, everyone. I will go read all the other conversations in Matrix that I couldn't read.
+
+CDA: Okay. That brings us to the end of Day 2. We will see you all tomorrow.
+
+### Speaker's Summary of Key Points
+
+- We covered the feedback received on the 2022 design of the proposal
+- A new design that does not include new primitives or overloading `===` semantics was presented
+- We talked about how this compares to a 'hash+equals' pattern
+
+### Conclusion
+
+- Lots of good feedback was received, with contrasting views in the committee.
+- No conclusions were drawn.
diff --git a/meetings/2024-04/april-10.md b/meetings/2024-04/april-10.md
new file mode 100644
index 00000000..b22775c4
--- /dev/null
+++ b/meetings/2024-04/april-10.md
@@ -0,0 +1,1117 @@
+# 10th April 2024 101st TC39 Meeting
+
+-----
+
+Delegates: re-use your existing abbreviations! If you're a new delegate and don't already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream.
+
+You can find abbreviations in delegates.txt
+
+**Attendees:**
+
+| Name             | Abbreviation | Organization    |
+|------------------|--------------|-----------------|
+| Jesse Alama      | JMN          | Igalia          |
+| Ujjwal Sharma    | USA          | Igalia          |
+| Waldemar Horwat  | WH           | Invited Expert  |
+| Daniel Minor     | DLM          | Mozilla         |
+| Ron Buckton      | RBN          | Microsoft       |
+| Eemeli Aro       | EAO          | Mozilla         |
+| Duncan MacGregor | DMM          | ServiceNow      |
+| Linus Groh       | LGH          | Bloomberg       |
+| Jason Williams   | JWS          | Bloomberg       |
+| Ashley Claymore  | ACE          | Bloomberg       |
+| Chris de Almeida | CDA          | IBM             |
+| Keith Miller     | KM           | Apple           |
+| Samina Husain    | SHN          | Ecma            |
+| Bradford Smith   | BSH          | Google          |
+| ZiJianLiu        | LZJ          | Alibaba         |
+| Ross Kirsling    | RKG          | Sony            |
+| Ben Allen        | BAN          | Igalia          |
+| Jordan Harband   | JHD          | HeroDevs        |
+| Nicolò Ribaudo   | NRO          | Igalia          |
+| Anthony Bullard  | ABU          | ServiceNow      |
+| Mikhail Barash   | MBH          | Univ. of Bergen |
+| Istvan Sebestyen | IS           | Ecma            |
+
+## Reality and spec differ on property key resolution timing for o[p] = f()
+
+Presenter: Ross Kirsling (RKG)
+
+- [issue](https://github.com/tc39/ecma262/issues/3295)
+- [PR](https://github.com/tc39/ecma262/pull/3307)
+
+RKG: Hello, everyone. It has been some time since I have presented a web reality bug, but I found another, and I will share it with all of you here.
+
+RKG: So this one's a fun one. It does not involve any recent features of any kind. It is a difference between reality and spec on the property key resolution timing for o[p] = f(). So specifically we have this test – it's actually a bit misleading. We have this test262 test, and it claims to check that the assignment operator evaluates its operands from left to right. Indeed, we would want that to be the case, but this test is not passed by any browser-hosted engines. Specifically, we see this.
So: arbitrary object o; p is an object with a `toString` method that is going to print out P; and f is just some arbitrary function that's going to print F. And we see from JavaScriptCore, SpiderMonkey, and V8 that F is printed before P.
+
+RKG: And it would not seem very good if we had jeopardized, you know, evaluation of subexpressions from left to right. In fact, this isn't jeopardized, because the issue isn't the order in which subexpressions are evaluated; the issue is the timing of the property key resolution. If we complicate this ever so slightly and make it so that the subscript is itself a function call – so we'll print "eval LHS (left-hand-side) subscript" when we actually evaluate the subexpression there, we'll now print "eval RHS" from our function f, and toString will say "resolve property key" – now we get things in an order that kind of makes some sense. We see "eval LHS subscript" is the very first thing, then "eval RHS", and then finally "resolve property key". And the reason for this is that, from the engine perspective, ToPropertyKey basically belongs to GetValue and PutValue.
+
+RKG: But the spec expects that it will be performed as part of LHS evaluation, as part of actually looking at that o[p]. For comparison, if we turn this into compound assignment – not just plain old equals, but plus-equals – we will see… actually, if we look at V8, we'll see a very naive evaluation of this, where we first do LHS evaluation, then we have our get and we'll resolve the property key once, then we'll evaluate the RHS, then we have the put, and if we're naive about how we're doing things, we'll actually double-resolve the property key. This is an engine bug – JSC just had this bug, which I just fixed; SpiderMonkey had already addressed it; V8 will deal with it at some point – but the point of the matter is, from an engine perspective, what we would do here is just perform ToPropertyKey up front so we can hand over a pre-resolved property key to our GetValue and PutValue operations. We actually also have to front-load RequireObjectCoercible, but basically, for compound assignment, reality and spec are in alignment because, thankfully, in that case, GetValue actually precedes RHS evaluation. This is not the case for basic assignment, where there is no GetValue whatsoever.
It's not a huge PR, but it was not necessarily as trivial as you'd want it to be, in the sense that it requires that we change a bit of the definition of what a Reference Record is in order to loosen [[ReferencedName]]. [[ReferencedName]] would now be allowed to temporarily be an ECMAScript language value other than a String or Symbol, in the case of a property reference for which ToPropertyKey has yet to be performed. So specifically, if we go to EvaluatePropertyAccessWithExpressionKey, we will no longer perform ToPropertyKey at that moment. We have replaced it with a note saying "in most cases, we'll do it immediately after". But instead, in places like GetValue and PutValue (there are one or two other places too), we will basically resolve the property key just when it's necessary, which will basically serve to align spec with reality.
+
+RKG: So, yes, to summarize, I think we have alignment from implementers, as well as the editor group, on this. I think the only possibility for pushback would be to demand that we actually confirm whether or not doing this would be web-compatible, but I would like to request that we do this. So we'll go to the queue.
+
+USA: Yeah, first on the queue we have KG.
+
+KG: Yeah, I'm in favor of making this change. I think that having the spec in alignment with reality is a good property, and we are not going to bring them into alignment by making a change in the other direction. So this is a good change. Also thank you, RKG, for doing this work, because this is an issue people have known about for a long time – or, well, in some sense of "known", at any rate; it gets rediscovered at various times. So thanks for doing the work to actually make this change.
+
+RKG: My pleasure.
+
+USA: Next we have SYG on the queue.
+
+SYG: The change looks good to me. Thanks for doing the work. Just to reiterate for other implementers in the room, the compound assignment behavior in V8 is a specific V8 bug. There is no proposal to align the spec with what V8 currently does. At some point, we will fix it.
+
+USA: Next we have DLM, who says that they support this, and then we have MM.
+
+MM: Yeah, I support – you did keep qualifying everything with browser-based engines. If Moddable is in the room – I don't know if you are – I would like to hear if this change works with your engine or if you're already compatible with it.
+
+USA: Doesn't sound like it.
+
+MM: Okay, so in any case, I'll let my +1 stand. I'm in favor of this.
+
+USA: Thank you, MM. And that was it for the queue. With that, RKG, you had a number of statements of explicit support and nothing opposing this, so, yeah, you have consensus.
+
+### Speaker's Summary of Key Points
+
+- Timing of property key resolution for bracketed assignment will be updated to match V8/SM/JSC.
+- Specifically, the type of [[ReferencedName]] in Reference Records will be loosened so as to allow ToPropertyKey to happen late.
+
+### Conclusion
+
+- Normative PR has consensus to merge.
+
+## Intl.MessageFormat status update
+
+Presenter: Eemeli Aro (EAO)
+
+- [proposal](https://github.com/tc39/proposal-intl-messageformat)
+- [issue](https://github.com/tc39/proposal-intl-messageformat/issues/49)
+- no slides
+
+EAO: A lot of the context here is expressed in issue 49 in the Intl.MessageFormat proposal repo. This proposal is stuck. The brief version of this update is that at the last meeting, what we ended up with was a resolution for me to go to TG2 and get approval for removing the syntax parser, about which some concern had been raised here. Then, without the syntax parser, I would re-propose the proposal here for Stage 2.
+
+EAO: However, the feedback I got in TG2 ended up effectively indicating that some members of that group, in particular the Google internationalization group, had concerns about removing the syntax parser. There is some discussion of the details in the issue, though I'm not sure if they're that relevant to discuss here, as they're specifically internationalization concerns, and concerns about what might happen if we were to go forward with the data model.
+
+EAO: The overall result of the concerns raised earlier here and the concerns raised in TG2 is that this proposal needs syntax parsing in order to proceed at all, and on syntax parsing, the conclusion that we reached is the indented bit here: "To standardize the syntax of a DSL, it would be meaningful/persuasive to see around a dozen organizations of various sizes, including ones which were not involved in MF2 development, make significant use in production of MF2 syntax across their stack (engaging application developers, translators, infrastructure developers, …). This will likely be required for Stage 2.7. It remains to be defined whether an intermediate, lower amount of experience would be sufficient for Stage 2."
+
+EAO: So, yeah, overall, this means that the whole of this is currently stuck, and it might proceed at some point. The general sense of the localization push that TG2 has been doing with this is that now, all of a sudden, the expected next action is not inside the JavaScript standardization bodies, but across the wider industry of localization, and JavaScript localization in particular. And in this weird way, TC39 – or the TG2 part of it – has kind of outsourced the development to the Unicode CLDR, where it's producing a result that's now entering tech preview. And now we, as TG2 in particular but TC39 more widely, are not going to actually comment in any way on whether MessageFormat 2.0 is any good; instead we're going to wait to see if it gets industry adoption, and if it gets industry adoption, then it's good and then we might consider it for further progress.
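(For context: a minimal message in the MessageFormat 2 syntax under discussion might look like the following. This is illustrative only, based on the Unicode tech-preview draft current at the time; the syntax was still subject to change.)

```
.match {$count :number}
one {{You have {$count} new message.}}
*   {{You have {$count} new messages.}}
```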
+
+EAO: Some of the discussion in this issue, in particular what happened earlier today, is potentially interesting: SFC mentioned that he at least has interest in working together with others in TC39 to craft a statement of what we would support advancing and what needs to happen to get there, and that developing this verbiage would be fruitful. That could be useful for us to do here; I don't know where else that could be done. My own presumption had been that the statement of support for this would be a stage advancement to Stage 2, but that's clearly not happening. So I'm just saying this thing is stuck. I'm interested in web localization, and clearly I need to go elsewhere than TC39 in order to do that. And once the web reality has adjusted itself to work around MessageFormat 2 – or something else, because apparently we're not sure about that one – then we might be able to standardize that in TC39. So that's what I had. I would be very happy if there's a queue. I'm going to need to stop the share in order to see TCQ, but that's it for me.
+
+CDA: We have a few items in the queue.
+
+USA: Hi. EAO, thank you for all the work that you have done so far on this. Obviously, you know, all the context that you gave explains exactly the situation. In my opinion, it was a great idea to do this in Unicode. What you mentioned regarding entering tech preview is, if anything, a great sort of result. MessageFormat 2 – which, we have to realize, came out of the core idea for this proposal – will end up improving localization all over, including in native software. That's just great. However, the proposal, which kind of predates MessageFormat 2, is in itself also important. I think the fact that we have Stage 1 means that the core motivation of this – that message formatting as a utility in JavaScript developer tools is needed – stands. There are a lot of applications for this. So I think it's still very well motivated. It's a shame that the, you know, exact proposal that you designed couldn't proceed the same way. But I think there are a few ways we can go about it, including breaking up the parsing as well as the formatting into two different parts. But, yeah, there are definitely directions that can be explored, and I think we should continue pursuing this proposal.
+
+EAO: Thought I'd mention, also related to this, one interesting fact: roughly something like a third of web localization ends up ultimately depending on the intl-messageformat package, a polyfill for the 2013 version of this proposal. So there's a lot of past precedent of effectively this API, or something very, very much like this API, being actively used. But that's unfortunately beside the point here.
+
+DE: Yeah, EAO, I agree with USA. You've done great work on this proposal over the years, along with many other people in this area at Igalia and other companies. I also don't share your sort of pessimism about the timeline. I don't think several years is a reasonable requirement. We're working towards a certain level of experience; within Bloomberg, we're working on end-to-end prototypes of using MessageFormat now that the syntax is stabilized. I'd encourage other companies to do the same, and I'm trying to get in touch with other companies to attempt these prototype deployments. Also, within the ICU tech preview process – maybe in six months, or if it takes longer, in 12 or 18 months – this could be stabilized within ICU4C or ICU4J, which I think would give a strong signal to TC39 that this syntax is stable. I can understand the concerns about removing the syntax: it would make JSON the kind of de facto syntax, which would be somewhat unfortunate, maybe not fatal. And I think from here, we should continue to develop ecosystem experience and continue this in committee. Remember, this proposal has, as EAO noted, been a goal of this committee for a very long time – from before we had Intl, you know, before Intl was active in the language, before most of us were here. And it is a common application need. The lack of a common syntax for this is a major kind of divergence and friction for translators – and for application developers, but especially translators – because they all have to learn how to do the particular translations. And the lack of good linguistic knowledge being included in that template string, because everyone invents their own little template formats that aren't so great, is a real problem, and we can contribute to the solution by bringing this into JavaScript, as we've long been working on doing. So, yeah, let's continue developing this, and I do think it would even make sense for the committee at some point to recognize: yeah, we want to eventually do this; we have a first draft that looks stable. That's kind of the definition of Stage 2. Stage 2.7 of course needs to wait until we have very, very strong confidence in the current syntax, but for Stage 2, I think we're more counting on: do we really expect this to eventually be part of the language? Do we have a good first draft? Which I think we already do, but with more ecosystem experience, we'll continue to reinforce that.
+
+CDA: DLM
+
+DLM: Thank you. Yeah, I agree with the sentiments that USA and DE have expressed. I'm also optimistic about this proposal. I think there's general agreement in the committee that this is solving a real problem, and a problem that needs to be solved. To me, I don't really see it as stuck. To be stuck would mean that there are sort of irreconcilable differences in committee that would prevent it from advancing. I think this just needs some more time, and I would encourage people who are interested in this to continue to work on it.
And, yes, I see no reason for this to be withdrawn or anything like that. I think it remains a valid problem statement, and I'd like to express recognition for all of the work that EAO has put into this so far. Thank you.
+
+SFC: Yeah, first, thanks, EAO, for the work that you have been putting into this. I think that this proposal is really important for the web platform. And, you know, to clarify the position that you alluded to earlier: what my team feels is just that the full scope of the work that the Unicode working group has been doing is really important to include in the proposal. You can see the details on the issue, of course, but we feel that the syntax is really a core part of the motivation of the Intl.MessageFormat proposal. And I think that there are going to be ways for us to look at avenues for advancing this. Even in TC39, there's still room. And I think there is room for us to figure out what would be the parameters for Stage 2 advancement of the proposal if it includes syntax, and I think that can be done within 2024. I'm definitely optimistic that that is a possible avenue. As I also alluded to in the TG2 call, there could be avenues elsewhere within the web platform that might also be appropriate; I think those could also be pursued. So I definitely think that there's room here, with time on the scale of quarters, not years, for us to take this to a meaningful stage. And I'm certainly hopeful that your energy and the energy of others will continue to be invested in this proposal, because I share the optimism that DLM, USA, and DE just expressed as well.
+
+USA: Just a side note. I completely agree with what you said. There are certainly ways to address the concerns that have been raised while also keeping the syntax. I think within the internationalization groups, we have this strong understanding of the syntax being something that was developed out of all the stakeholders coming together and working hard on figuring out what is best for this space. So, yeah, I think in the next few months we should spend some time to figure out a solution that works for everyone and can achieve the goal of adding MessageFormat with the syntax, but done in a way that the rest of the delegates agree with.
+
+JHD: I wanted to echo all of the optimism that has been expressed, and I completely empathize with the frustration of having a proposal that feels stuck. But I think this happens to most proposals, especially at Stage 1, and this is just the normal way proposals work: you run into problems, and you have to resolve them, and sometimes it's not clear-cut or unambiguous how to do that. And, you know, proposals sit at Stage 1 for a very long time. Promise.try got 2.7 this meeting, 2 last meeting, and got 1 nine years earlier – and not all proposals at Stage 1 will advance, either. So I hope that you feel motivated to continue with it; I want to see this advance. I just wanted to emphasize that this sucks for all of us and this is normal.
+
+EAO: Yeah. So thank you for everything that everyone said here. A meta-question I've been struggling with, with respect to this: from how I look at this stacking of MessageFormat stuff, there's the MessageFormat 2 message syntax that has been defined in Unicode, and then there's the JavaScript API that we are defining here, only in TC39. And, well, the JavaScript API of this has been effectively the same since 2013, and, as I mentioned, about one-third of web localization currently ends up using an API that looks really an awful lot like the one that's currently being proposed, except that it is working with ICU MessageFormat 1 instead of the MessageFormat 2 being proposed here. So it is confusing to me as a delegate to get this sort of mixed messaging of us needing to have all parts of this vertical stack – including the MessageFormat 2 syntax – effectively finalized before any advancement can be considered. Especially as, even during this meeting, we had the Records & Tuples discussion yesterday, where clearly we have a Stage 2 proposal where we're kind of reimagining what that could be. And we've got extractors and pattern matching coming up, and there, if I look at the progress and discussions ongoing this week in those repositories, there's a lot going on and a lot that could still change with the syntaxes we are proposing. But specifically, those we can maybe consider for Stage 2? And yet Intl.MessageFormat is somehow completely out of the picture. So the weird thing here to me is that I don't think in this whole equation we as TC39 are expressing any opinion on MessageFormat 2, the syntax. We're saying that it needs to be adopted by the industry, and if the industry adopts it, then we're good with that. Whereas this work was originally started because we asked for it, effectively. So I guess we're okay with that, but it does mean that, as my interest going forward is to take this discussion to various actors in the industry – localization library developers, and others – I don't know how those discussions are going to go.
And whether those parties are going to follow exactly the Unicode MessageFormat 2 syntax, or whether this very wide range of actors is going to feel like actually they could, you know, do a slightly different thing here, and that might be better for them. This is going to be interesting to see, and I very much welcome hearing next from USA, as I really would like some help with this, because I'm not going to be doing much with this proposal for a while -- I don't think there's much to be done here. The work needs to be done elsewhere. + +USA: Yeah. I completely understand what you just said, EAO. I mean, it's a shame that all of that happened. I do feel like Stage 2 is probably the wrong milestone to be fixated on for this particular issue, because, you know, it kind of doesn't fit the understanding of Stage 2, at least for me, in the case that there's actually very little at stake at the specification level -- yeah, anyway. However, I do feel, as I said, that there are ways out, that we can diplomatically resolve this, and I think that, you know, I'd be happy to take on this responsibility if you are okay with it. So, yeah, I'd like to take on this proposal if you step down as champion. + +EAO: Would you also be willing to join me as a fellow champion, if I don't step down? + +USA: Absolutely. That would be my preferred choice. + +EAO: Let's talk about this offline. + +USA: Yes. + +DE: So I'm glad that USA is volunteering, and if EAO stays involved in this, then that's great. But also, we have to consider the psychic effects that the committee dynamics have on TC39 members. We're all working hard together to make technical progress on this project that we share and that we all care a lot about. And it can be really difficult within that to bring that together with coordination with other concerns and signals that feel somewhat negative.
I've definitely felt a similar way to EAO in the past, sort of burnt out by these very contradictory signals from different committee members, where I'm supposed to somehow navigate all of that as the proposal champion. Of course, none of this means that we should reduce the rigor that we're applying when figuring out what should go into JavaScript. But this is a repeated pattern in committee, and we have to figure out what we can do about our dynamics to reduce these negative effects on our colleagues. + +SFC: Yeah, I'll just also put on record that we discussed this a little bit, too, and either myself or someone else from my team may also be happy to join as a co-champion later in the year. Right now we're putting our efforts into the Unicode side of the development. But once that wraps up, putting the energy into the JavaScript/web platform side is definitely something we think is going to be important to do. + +CDA: There is nothing further on the queue. Okay, would you like to dictate a summary for the notes at this time? You can also do that asynchronously, but it would be appreciated either way. + +EAO: The Intl.MessageFormat proposal has reached a bit of an impasse, and next steps here will need to include industry signals of adoption of MessageFormat 2. This may take a while. USA is willing to step up as a champion or co-champion of the proposal. + +DE: Yeah, I disagree with the conclusion saying that the proposal has reached an impasse. I think we heard basically all the commenters saying that this proposal is good to continue developing, and that we have concrete steps to continue developing it. It's okay if Eemeli wants to take a break from championing it, but I think it's kind of a disservice to the work that went into this proposal to say that it's at an impasse, when we actually have a ton of work ahead of us that we are doing already and will be continuing to do.
It's just not a statement of fact to say that it's at an impasse. + +EAO: Sure. + +DE: So the conclusion should be something that we're adopting as a committee. It's not only the thing that the champion says. + +CDA: Right. + +SFC: Yeah, I agree with what Dan said, as well as the other half of that statement. I feel that there's room to advance this proposal at least to Stage 2, you know, even given everything that's already been said at the committee. And then in terms of Stage 2.7, I think that's definitely something where we have to figure out exactly the parameters, but, you know, I think the proposal still has room to advance, even within TC39, without your statement of, you know, large, widespread industry adoption. The statement that you presented earlier said that there were some delegates who said that's a requirement for Stage 2.7, which is likely the case. But I think for Stage 2, there's still room within this committee. + +CDA: DLM + +DLM: I agree with SFC and DE. I don't think we should say there's an impasse. I think we should recognize that this is going to take more time. + +CDA: DE, we have like two minutes. + +DE: Okay, I'll just speak to the statement. Widespread industry adoption -- that was not the requirement that we agreed on last time. We agreed on widespread, you know, some experience, but not that every major enterprise has already adopted it for all of its internationalization. We went through some iteration last meeting on making it not a time-based thing, and not an "it's completely used by everyone in the world already" thing. And we shouldn't, you know, enforce a stronger requirement in our conclusion for today. + +SFC: Yeah, I agree with that, and I think it's important that we choose the right words. I don't know exactly what those correct words are.
I've heard a few different versions of them stated today regarding the industry validation, adoption, however we like to say it, so I think it would be a good thing to do at some point to pin down exactly what the right words are and exactly what those words mean. + +CDA: Okay. I put myself next on the queue, which is just that I think this particular part of the conversation underscores the need for making sure there's clarity of understanding from everyone involved -- that everyone's on the same page about the current state of affairs, as well as next steps. So maybe Eemeli, along with the co-champions and others interested, can get the conclusion for the notes as a first step to be something we can all agree on. + +DE: One last note. I'd like the conclusion to include encouragement to get in touch with the proposal champions and with people working on MessageFormat, to do prototyping in your application and your organization, because we're really looking for people who can do this work with us, who could help work out any last kinks. Thanks. + +EAO: Just thought I'd mention that if you want to play around with this, the package messageformat@next on npm is a fully functioning polyfill for this, and it also includes a whole bunch of utilities for doing stuff with the format. + +### Speaker's Summary of Key Points + +- The speaker listed out the different roadblocks in terms of realizing the initial goals of the proposal. While there was some hesitation about the inclusion of the newly designed syntax, there were also concerns against shipping the proposal without a standard syntax. +- Many committee members expressed support for the goals of the proposal again, reasserting that they are in favor of it. +- Additional helpful context was shared, and the presentation ended with enthusiasm regarding the future of this proposal. + +### Conclusion + +- The proposal has hit some blockers during its development.
+ +- There is, however, still a great deal of positive sentiment in the committee regarding the proposal's stated goals (adding a message formatting mechanism to ECMAScript). + +- USA joined the proposal as co-champion in order to assist EAO. + +- There will be explorations by the champions regarding the possibility of exposing MF2 elsewhere on the web. + +- The champions will also work on improving adoption and gaining experience for the syntax. + +## Discard Bindings for Stage 2 + +Presenter: Ron Buckton (RBN) + +- [proposal](https://github.com/tc39/proposal-discard-binding) +- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqpoWw0poDW3dawBQg?e=O3UP1c) + +RBN: So good morning. For the next few minutes, I'll be talking about the discard bindings proposal, based on where we stand with the proposal right now and the potential for where this is going to go in the future. So to start, just a brief overview. The concept of a "discard": it's a "non-named placeholder for a variable binding". It allows you to elide unnecessary variable names in various contexts, such as `using` and `await using` declarations, function/method parameters, array destructuring patterns, extractor patterns, and pattern matching. We are currently proposing the use of the `void` keyword in place of a binding identifier, and the idea of discards in general has prior art across numerous languages: C, C++, C#, Python, Rust, and Go. C and C++ allow you to elide parameter names, for example; C#, Python, Rust, and Go all use the underscore; and now added to that list is Java, which is adopting underscore as the mechanism it uses for establishing discards. + +RBN: So, on the motivations of the proposal and where things stand: one of the reasons we want to have discards is that there is often a need for the side effects of a declaration without the actual variable binding.
So, again: parameters that you might not want to name, that you're not going to use when you're overriding a method or passing a function as a callback; variable bindings that you need for their effect, like reading a property or skipping a value in array destructuring; or, in the case of `using` declarations, a binding that establishes a resource whose lifetime is bound to a block scope, but where you don't intend to use the resource value after the fact. This is the case for things like locks in a shared-memory environment, but it could also be used for things like logging and tracing patterns in a function, et cetera. + +RBN: There are some existing single-purpose solutions that already exist within JavaScript -- elision in array destructuring and array patterns, bindingless `catch` -- but there's no general-purpose solution that can work in other locations. Empty object patterns (`{}`) for the declaration pattern are insufficient because they throw on null and undefined, where you might want to allow a null or undefined value; specifically in `using`, that is an explicit capability -- you can have a null or undefined resource to indicate a resource that is conditionally bound. In addition, `using` declarations do not allow binding patterns, for various reasons that we've discussed in prior plenary sessions. So `{}` is not a viable option, and simple elision is not sufficient for many cases. For `using` declarations that's already valid syntax; maybe it's a potential option for object literal or object destructuring patterns, but it can look confusing and possibly be conflated with shorthand assignments or shorthand binding patterns, so perhaps it's not the best solution to consider. + +RBN: So this is why we're looking for something that's an explicit syntax to indicate a discard. So there are some updates to this proposal that we've been discussing.
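The single-purpose mechanisms RBN lists, and the null/undefined pitfall of empty object patterns, can be demonstrated with JavaScript that runs today (a sketch; the variable names are invented for illustration):

```javascript
// Array elision: today's single-purpose way to skip elements while
// destructuring.
const [, second] = ["a", "b"];

// Bindingless catch: today's single-purpose way to discard an error.
let caught = false;
try {
  throw new Error("boom");
} catch {
  caught = true;
}

// An empty object pattern is NOT a general-purpose discard: it throws
// on null/undefined, so it cannot stand in for a conditionally-absent
// value the way a discard binding could.
let emptyPatternThrew = false;
try {
  const {} = null; // TypeError at runtime
} catch {
  emptyPatternThrew = true;
}
```

Each mechanism works only in its one position, which is the gap the proposal aims to fill with a single general-purpose discard.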
I think most of these were presented in the February plenary as well, but we've been further discussing them. So discards would be allowed in `var`, `let`, and `const`, but not at the top level. It is particularly useful to be able to use discards in, say, an object destructuring pattern, because you can skip over properties you don't want and -- sorry, this slide doesn't show this -- then spread the rest into another object, so you can actually pick out the properties that you do not want to carry forward in that object, which is very valuable. But in the case of something like `const void = obj`, it's not really all that useful; you could have just executed the operation as an expression statement beforehand. One place where it is again useful is something like `using` declarations, which have some type of behavior tied to them. Discards in array literals and assignment expressions are not supported; we've decided to leave that alone, so you would still use elision for those cases. And there was a clarifying question from SYG: yes, top level in this case means outside of a pattern. So you would not be able to use it in place of a binding identifier that's at the top level of a `var`, `let`, or `const`; it's not allowed at the top level, but only somewhere within a binding or assignment pattern. + +RBN: We've also discarded other, less-motivated locations for discards, such as catch bindings -- we already have bindingless `catch` -- and import and export, because you can just elide those imports and exports. There's generally no semantic behavior behind not using imports if you're not referencing them; there are no getters that trigger, no side effects you need to care about there. Class and function names aren't necessary because you can elide those names in their expressions or default exports.
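The object-destructuring use case RBN describes -- skipping a property so that a rest/spread excludes it -- currently forces an unused named binding, which is what a discard would replace. A minimal sketch with invented names:

```javascript
// Today, dropping a property while keeping the rest requires a named
// throwaway binding (the `_password` name is just a convention).
// With the proposal this could be `{ password: void, ...safeUser }`.
const user = { name: "Ava", role: "admin", password: "hunter2" };
const { password: _password, ...safeUser } = user;

// safeUser is { name: "Ava", role: "admin" } -- `password` is gone,
// but `_password` lingers as a real (and unused) binding.
```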
There's no real need to insert anything there, and field names don't make sense because you have constructors and static blocks, so you have places to evaluate things that do not need a binding in those cases. + +RBN: So, a brief summary of the current state. For `using` and `await using`, we want to be able to say `using void =` or `await using void =` as a way to register the resource that is being initialized without actually introducing a binding. This was a capability that was part of the proposal in various forms since its inception and was removed to reach more of an MVP version of the proposal, and it has always been considered a high-priority capability to add back in. We postponed it because we wanted to look at the broader capability of discards across the language. In addition to that, we talked about binding patterns -- being able to exclude properties you don't want in a spread/rest assignment to another object -- as a use case. + +Another use case is explicitly labelling elisions, especially trailing elisions, where you have the issue that you could possibly misread the number of commas; elision is also currently supported in assignment patterns as well. Then parameters, so `(void, a)` to ignore a leading parameter in a callback; extractors, to be able to skip over leading destructuring elements in the pattern; and in general, pattern matching also needs a mechanism to have an irrefutable discard, so you can say things like "X must exist, but I don't care about its value". And, again, most languages that have pattern matching have some mechanism of discards that does this as well.
+ +RBN: So this is also something we've talked about before. One thing that we've been recently talking about in Matrix over the past two months or so, since the February plenary, has been the potential for aligning with pretty much every other language that has some mechanism of discards by using underscore, and we've been discussing a way that could possibly work that in most cases would not run into the concerns around repurposing an identifier. So for one, again, as I mentioned before, Java recently added discards that use underscore; the number of languages that use this is growing, and having consistency with them helps when translating equivalent capabilities from one language to another. If someone is writing C# and Java and JavaScript, or Scala and JavaScript, etc., they can take some of that knowledge with them as they go from language to language. And there is consistency when it comes to documentation. Some places where we can already use underscore repeatedly include `var` declarations -- `var` has no requirement that a declaration of that name doesn't already exist, unless it's also a lexical declaration; we'll get to those in a moment -- and also function argument lists, as function parameters in non-strict mode are allowed to be duplicated. Where it is not currently legal is when you introduce lexical declarations or strict mode parameters. Currently today, if you were to write a function `f3()` that had both a `var` and some block-scoped or lexical declaration using underscore, that would be a syntax error. What we would be proposing, if we wanted to move to underscore, would be that using an underscore in multiple block-level declarations in the same lexical scope would result in any reference to it in that scope, or any nested scope, being a syntax error.
Now, if you had another block that contained its own declaration for underscore -- as a lexical declaration or a variable declaration, so a function declaration scope for `var`s, or any block scope containing the lexical declaration -- it could potentially redeclare underscore as having a specific meaning and get out of that syntax error case. But this basically would take something that is already a syntax error today and make it legal in the case where you don't reference the underscore. And it would still be a syntax error if you do reference the underscore, because we don't want you to reference this value; it's not intended to be used. We wouldn't break the `var _ = a, _ = b` case, like the example on the left, but we could support cases that are a syntax error today. + +RBN: The same could go for function `f4`, for example, where I could have multiple lexical declarations using underscore, and referencing underscore would be a syntax error. As well, repeating underscore in strict functions would have the same mechanism, where within that scope, unless it is redeclared in a nested scope, underscore would be considered illegal to reference. There still is the potential that this is considered a problem if someone is importing, say, Lodash as the namespace `_`. The case that we're presenting here is that, were you to use underscore as a discard there, that would already be a syntax error, so you would not be able to use underscore as a discard anyway. You can continue to use `_foo`, et cetera, for discards when you need to use the underscore variable, without breaking in those cases. There would be no code that runs today -- that is legal today -- whose behavior would change were we to add these capabilities. It would only be if you introduced a new underscore variable, or a second underscore declaration inside of that block, that it would affect how underscore works elsewhere.
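The split RBN describes -- `var` and sloppy-mode parameters already tolerate duplicate `_`, while duplicate lexical declarations are a SyntaxError today -- is checkable right now. A sketch (the function names are invented):

```javascript
// `var` permits duplicate declarations of the same name, so `_` can
// already be redeclared freely there today.
function varCase() {
  var _ = 1;
  var _ = 2; // legal: duplicate `var` bindings merge
  return _;
}

// Duplicate *lexical* declarations of `_` in one scope are a
// SyntaxError today -- exactly the currently-illegal space the
// underscore idea would repurpose for discards.
let duplicateLetIsSyntaxError = false;
try {
  new Function("let _ = 1; let _ = 2;"); // throws at parse time
} catch (e) {
  duplicateLetIsSyntaxError = e instanceof SyntaxError;
}
```

Because that second case can never run today, giving it discard semantics would not change the behavior of any existing program.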
And, again, that would be an intentional use of a discard, so you're essentially opting into the discard behavior when you do that. So that has some benefits. + +RBN: One thing that would not be viable if we go with this is that you would not be able to use underscore in an assignment pattern. That already has meaningful semantics, as it's an assignment. We would not have discards in assignment patterns, but that is actually something I consider to be a possibly acceptable concession: the most important case is declaration bindings, since those can't be repeated, so that's what we're mostly trying to focus on. Not having them in assignment patterns is unfortunate, but not terrible. You can declare any variable you want to be the discard in those cases, and write an assignment that overwrites that value repeatedly if that's something you need. + +I'd like to open the queue to discuss. I will also say that there is some concern about using `void` as a keyword -- or pretty much any keyword -- in some of the grammars. This was brought up by WH on the discards issue tracker over the past 24 hours or so: using `void` introduces some complexities with cover grammars that underscore would not have, because it's just an identifier. So all the places you would want to use it are syntactically valid cases, and there's very little spec text that needs to be written for the grammar for this to work. All of it would be in how we handle the AOs for how bindings are established and initialized. + +USA: We can go to the queue. + +JHD: I've mentioned this for other proposals. For this proposal, Tennent's Correspondence Principle used to come up in committee a lot during ES6's design, so frequently that it appeared on TC39 Bingo. Essentially, my understanding is that when you wrap code in a structure, it shouldn't change its meaning.
I think, strictly interpreted, JavaScript violates it in a number of places, but that doesn't mean it's not still a good language design principle to follow. The fact that no current code will change its semantic meaning is good, but that's sort of a precondition, because if existing code would change, we couldn't make the change. But if you refactor code, or if you have code in a function and then you suddenly alter its signature to use this functionality, your code could break. And that to me is just not going to be worth it for any identifier. I think it's unfortunate that so many languages for which it is an identifier are choosing to use an identifier. But just because other people are making a mistake doesn't mean we have to follow the lemmings over the cliff. + +RBN: There is also the possibility that we could expand it to be any identifier repeated in a lexical declaration. If there's a conflict with the identifier, you could use double underscore, or repeat any identifier 2 or 3 times in lexical declarations, to have the same effect -- it doesn't have to be underscore. It has the value of being simple. It aligns with many places in the ecosystem, so it's fairly well established that this is what most languages use for discards, and it's fairly recognizable in those cases. Yes, as has been said, it doesn't break legal code. So -- + +JHD: Sure. We wouldn't be discussing it if it did. + +RBN: Yes. It does have the caveat that if you were using underscore as a reference to something and you then declared underscore as something else, it would break -- but I would be using underscore as a discard within that block, so I am not breaking a reference to something on the outside unless I do that. If I do that even once, I am breaking something on the outside. So you're already making an intentional change to use underscore as a discard when you do this.
And it suffers the same issues that using any variable name in any declaration would have with shadowing and scoping. It doesn't break any of the semantic invariants we already have to use it as a discard, and it's especially more valuable when you expand it to any identifier that's duplicated in a lexical declaration. + +JHD: The idea of allowing flexibility in the selection of the discarded identifier is interesting. But I would expect a leading underscore, even if there's something after it. That's how TypeScript treats it -- if you put an underscore at the beginning, I believe? + +RBN: If you have the appropriate flag in TypeScript to error on unused declarations, it errors on anything that does not have a leading underscore. "I declare this, but I'm not using it" with a leading underscore is the de facto standard. But that's TypeScript-specific; you have to layer ESLint on top to -- + +JHD: Right. + +RBN: -- flag the unused variables, unless you override the default to say underscore is allowed, or have the rule set to allow underscore-prefixed things only. So there's a general practice of using underscore-named things in most languages. But it introduces unnecessary bindings, and it requires the cognitive overhead of making sure it won't conflict with something else. Using underscore or `void` just takes most of that complexity out of the equation and makes it easier to keep writing code -- as is often said, there are a few hard things to do in programming, and one of them is naming things. + +MM: Okay. I'm sorry. If you're still talking, sorry. I didn't mean to interrupt. + +MM: So I am strongly in favor of `void`. I am strongly against underbar, although the cover grammar issue you said WH raised is new to me. I am glad to see WH on the queue. Please explain that, and the significance of it.
But these underbar rules are sufficiently complicated, with enough edges, that even if we decide for whatever reason that we can't have `void`, I would still say no to underbar, even though I want the feature, because these scoping rules around underbar are too complicated, and it's easy enough to use the existing linting support for underbar-prefixed parameters. So that's it. + +RBN: So I will say again: on the linting support for underbar-prefixed parameters -- I am not concerned about the reference case. Writing `using _ = a` and then `using _ = b` is a syntax error today; it raises an error because you can't have duplicate declarations. It's already illegal to do that. We would be leveraging the same rules that already make this illegal. + +MM: I understand. I am still strongly against underscore. I think it's too complicated. + +JRL: I am the opposite of MM, I guess. I am absolutely against any sort of ASCII keyword -- anything that's not underbar/underscore. The point of a discard binding is to mark that I know I am not using this value. It's purposeful, but it shouldn't have any sort of weight when you are scanning the code. It should look slightly different, but disappear. Underscore is the only thing that does that well. Maybe another ASCII punctuator could do this, but nothing has the same precedent as underscore. The complexity around scoping is just because we have chosen arbitrary rules here in this presentation. We could make it so that you can simply redeclare underscore, and nothing else changes about anything else in the syntax. That seems like a much better solution to me. + +RBN: Yeah. I will say that using an ASCII punctuator is problematic. For one, we are short on ASCII punctuators we can use; when possible, we prefer keywords -- be they existing reserved keywords -- to preserve the limited set of ASCII punctuators we have for other uses.
And the other problem is that most of the ASCII punctuators we could consider have other uses that make this not viable. You could not use a `+` or `-` or `*`, because that turns into a compound assignment, and that doesn't work. You are left with a small subset of symbols, and those have other potential uses that are more indicative of other operations than of a discard. In my opinion, the two things that mean "discard" are either underscore, because that's the accepted way of representing a discard in most languages, or `void`, because that has the same semantic meaning of discarding something -- discarding a value that is a result, such as in a `void` expression. So `void` and underscore work. Something like `@` doesn't; it indicates something more like where decorators might be employed. And `#` is generally used for things like private names. So it really doesn't feel like those are viable options, and it becomes more complicated to pick them. I think having some ASCII word is the best approach, whether that is underscore -- allowing any variable that is duplicated more than once to be legal rather than illegal -- or using something like `void`. + +USA: Moving on, there is a clarifying question from DRR. But before we do, I would like to remind you that we have a huge queue and, like, 6 minutes. + +DRR: I wanted to speak to the last point, because, MM, what are the complexity concerns you have? Are they just because of some of the rules that have been mentioned here? Could they be fixed in some way or addressed? + +MM: It's the rules that are mentioned here, and the fact that underbar is already a legal variable name. It means that I don't think there's any way forward with underbar that doesn't create complexities in the wrong place. `void` -- I wish that we had been able to adopt underbar early in the days of JavaScript for this purpose, in which case there would be no conflict. But we didn't.
And I think, starting from where we are, it's not worth the complexity. I share some of JRL's aesthetics: `void` looks like an ASCII identifier, and even though it's a keyword that's often seen, people who aren't familiar with the language might look at it when reading code and not know it's a keyword. But even then, if they mistake it for a variable, it could never appear in the position of a use occurrence of a variable. So it does guarantee that even if you thought it was a variable, there will be no use occurrence. So the confusion issue is not terribly strong. And I find `void` just very pleasing. + +DRR: Okay. I am good at speaking now. + +WH: I agree with JRL for the same reasons that he stated. I would also like to present a more serious issue, in that the currently proposed grammar breaks existing programs by introducing a version of `void` which doesn't take a parameter as a cover grammar. I found some consequences of that. The first one I found was `await void x`, which has an existing behavior which was broken. That was fixed. But there are more serious ones, like `void` followed by a `/`, which now can get interpreted as either a division or as a regular expression. `void + x` gets interpreted as an addition of `void` and `x`. So there are significant problems with the way that the syntax using `void` is structured. There may be solutions to this, but they would require rethinking how the whole thing is done. And I would rather use something like underscore instead. + +RBN: So, to speak to that point, there are two things that I am considering here. One is that I believe it's feasible to have `void` in the way that it's been defined, with differences in the cover grammar, to make sure that the spec and behavior match the intentions. Things can be restricted for the problematic optional cases. These are things we can look at.
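WH's ambiguity concern arises because the token sequences involved are already meaningful today, with `void` as a unary prefix operator. A runnable sketch of the existing parses that a binding-position `void` would have to coexist with:

```javascript
// `void` is a unary operator today, so a following `/abc/` is a regex
// literal operand, not a division:
const fromRegex = void /abc/;  // void applied to a RegExp object

// ...and `+ "1"` is a unary-plus operand, not an addition:
const fromPlus = void + "1";   // parsed as void(+"1")

// Both evaluate to undefined; a grammar that also allows a bare `void`
// in binding/assignment positions must disambiguate these sequences.
```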
The other thing, too, as I was saying with underscore: we lose assignment patterns. All of the complexities come from the assignment pattern case, as far as I am aware. So if we end up dropping assignment patterns, all of those concerns, I think, disappear. You can correct me if I am wrong -- I don't think the concerns come up anywhere in binding patterns or parameter names. + +WH: Okay. That's a bit too abstract for me to give you an informed answer. My point is, I think we agree that the current grammar breaks existing things. We agree that that is unintentional, and I think it should be fixed before we go to Stage 2. + +RBN: You say we should address this before we go to Stage 2. Looking at the Stage 2 requirements, does this not seem like it might be an editorial issue to address, rather than a fundamental problem that prevents this proposal from ever working without changes? + +WH: That's for issues in which it's clear where the bug is and what the fix might be. Fixing this requires significant grammar restructuring. These are not simple editorial changes. + +WH: I am uncomfortable with skipping the review before going to Stage 2 without knowing what solution the editors come up with. The whole point of advancing to a stage is that people other than the editors can review the spec. + +MM: So I am glad to have heard that from Waldemar. I now understand what the problem is for `void`, and let me say, I am not going to close off underbar in realtime. This was actually the first I have heard of underbar being raised as a realistic proposal. I am open to continuing to discuss this, and there is probably a set of underbar rules I would accept. I don't know. + +USA: All right. There are a couple of other topics on the queue, Ron, but you're at time. What would you like to do? + +RBN: Okay. This underscore question is something we need to continue discussing. I would appreciate it if folks could file feedback and continue the discussion on GitHub.
I plan to ask for Stage 2, but the one thing I would consider to be a Stage 2 blocker would be if we needed to change `void` to underscore, which requires heavy spec rewriting. We have done something like that in Stage 2 before, but it's viable to consider. I was looking for Stage 2; there was positive feedback on going to Stage 2 at the last meeting, so that was an indicator it's worth pursuing. But I definitely can hold off on Stage 2 as we continue having this discussion. +

USA: Would you like us to capture the queue? +

RBN: Probably useful. I don't know if we will have a chance to discuss this, but having it posted somewhere to refer to in other discussions, I think that would be valuable. +

USA: Sure. Let's capture the queue and we can come back to this later. Maybe, if we have time in this plenary, or later, if you so prefer. +

RBN: I will point out this slide (last slide) if you need a reference. This is the link for the proposal repository and the current spec text, as part of those discussions. +

USA: Okay. Great. Thank you, RBN. +

### Speaker's Summary of Key Points +

- Proposal specification text written for `void` syntax +
- Some interest in pursuing `_` over `void` +
- WH noted an issue with the cover grammars for `void` and `await using` that must be addressed +

### Conclusion +

- Did not advance due to cover grammar concerns. +
- Further discussion necessary regarding `_` vs `void`. +

## Extractors for Stage 2 +

Presenter: Ron Buckton (RBN) +

- [proposal](https://github.com/tc39/proposal-extractors) +
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqpinLRBZZwud0rM9w?e=s7hKoI) +

RBN: So again, good morning. I am talking about extractors, with the potential for Stage 2. +

So, on the motivation for extractors: I have had some questions and comments about the motivating statement here. It says there's no mechanism for executing user-defined logic during destructuring. You can execute some user code if you have a getter, for example.
That will run user code; or if you have a computed property name, that will evaluate user code there. But those don't affect the destructuring process itself. This is a mechanism to have user-defined code have an effect on the process itself, rather than just side effects that occur during destructuring. +

RBN: The same applies to the pattern-matching process: user-defined logic while matching. This exists both in the pattern matching world and in the destructuring world. There are similarities between this proposal and other languages. Scala has extractor objects, which is where I started when I proposed this; we had a tour of Scala's support for this capability. (?) has some syntax and maybe not all the same mechanisms. In a potential future enum proposal, the idea is to align some of the design with Rust-style syntax, but that depends on something like extractors to get the same type of behavior. F# has something similar. C# has type testing and deconstruction. As one – (?) and destructuring with user-defined logic. OCaml and Swift as well. +

RBN: This is designed to look like the opposite of function application or construction. Where function application and construction take a series of arguments to produce a single result, an extractor takes a single subject and produces a series of results. It is the dual, or opposite, behavior in these cases. +

RBN: One of the capabilities of extractors is that they allow you to have dotted references, starting with some reference to an identifier that is in scope – an IdentifierReference. That is then evaluated, converted or checked to verify that it is an object that has a symbol method. And that is what is evaluated to produce a result, which in the case of a destructuring extractor is either an array or an iterator, or it is a falsy value indicating the match failed or is incapable of matching (?). There are additional options considered that are discussed in the pattern matching proposal. Things like nested destructuring.
So you can use an extractor that takes maybe a single argument or multiple arguments to further deconstruct into other values. And it can itself be nested within another pattern, so that you can pull things apart and do data transformations and validations within the destructuring pattern, which is the key bit that is not really feasible to do for binding patterns and parameters today in a consistent way. +

So again, this also shows examples of using these within parameters. +

RBN: So: brief history. In September 2022, this was proposed for adoption at Stage 1 and included array extractors and object extractors. Array extractors used the parenthesized syntax, because brackets wouldn't work in the assignment pattern case. So we used parentheses (?). Object extractors used curly braces and were more of an object pattern approach. We only sought advancement at that time for extractors in binding patterns. The reason for that is that the other side is being pursued as part of pattern matching. These proposals are linked as far as cross-cutting concerns. All the design considerations for pattern matching apply to extractors, but since pattern matching doesn't talk about destructuring – binding and assignment patterns – it's not a good fit for that proposal itself. +

RBN: It was felt better to have this on its own as a separate proposal. In 2023 there were a number of discussions with the champions, and we agreed to incorporate array extractors into the pattern matching proposal. Object extractors were removed, due to the possibility of them eating up too much syntax space – no-line-break curly-brace forms that we might need to extend the language in other places. And in February 2024 we provided an update on the proposal that showed the object extractors portion was officially dropped, and we discussed that further consideration needs to be given to iterator and extractor performance. That's the history… We want this as part of the destructuring process.
Transformation and normalization during destructuring isn't possible in JavaScript today without very opaque, complex syntax that only works inside an assignment pattern, or without breaking things into multiple statements within a FunctionBody, which kind of defeats the purpose of something that is concise and clean. The idea is to make these things very small and easy to use in the pattern matching case, and to have the same capability also in the destructuring case. +

RBN: As a result, we chose to leverage the design that is employed by the extractor and rest variable patterns as prior art and a guiding direction for how to build this. So this is again based on that, and has cross-cutting concerns with how those features are handled. +

RBN: This is designed to provide parity. One of the key requirements on the pattern matching side was that, for extractors to be a valuable addition, they must also exist on the destructuring side. It's essentially required that they exist. I don't know how inflexible that is, but there's value in them existing in both places. The destructuring side is the irrefutable match – "it must be this thing" – versus pattern matching, where there's a branching condition on whether something matches or doesn't. In the destructuring case, they must exist and have a value, like regular destructuring must point to something with `Symbol.iterator`. These must exist when those things run. +

RBN: Another capability or benefit of extractors is that they provide the basis for a potential future with enums. We discussed an enum proposal that was not adopted. I plan to bring this back because I have had additional discussions and dug deeper into the benefits that can be gained over how engines normally interpret regular JS objects. More exploration is happening before I bring it back to committee. It's an interesting direction that we could take: enums, possibly, in a future proposal.
Some are looking at this as a future direction – what we could do if we had that capability. Even without it, it's possible to approximate some of the semantics, without the benefits, using regular classes. +

RBN: So those benefits exist even without that. As far as the proposal goes, it's two parts. Extractor MemberExpressions have a dotted name. This is now more aligned with how custom matchers work. You can use element access as part of it, or reference MetaProperties like `new.target` or `import.meta`. The idea is that the left side of the extractor is some IdentifierReference or MemberExpression that evaluates to a thing that we can then use to determine how the customizations will be applied to the process. +

The extractor MemberExpression is again used to reference the matcher based on whatever the scoping rules are – whether you are referencing `this`, `super`, et cetera. +

RBN: In binding patterns, an extractor performs an array destructuring on a successful match, and is denoted by parentheses. This is better explained in the assignment pattern discussion, which comes next. But some examples include being able to have a parser that can parse an input into three outputs. This is the example that would come out of, say, Scala – the Scala tour that shows how extractors work essentially has this example in it. Extractors are designed to parallel destructuring patterns in where they are placed and how they can be nested, and they're designed to mirror the application side of things. So an array destructuring pattern `= array` has a similar look and feel. As a result, the syntax for extractors has a similar feel to the application side of things. You can match `List(a, b)` against a `List(1, 2)` or a `new List(1, 2)`; `Option.Some` is an extractor, and extracting `value` from an `Option.Some(1)` would pull the value 1 out and put it into the variable `value`. It's the dual syntax, so it's recognizable. It's not some arbitrary different syntax you have to learn.
As a result, it feels syntactically idiomatic within JavaScript. +

RBN: In assignment patterns, we have discussed how, if we were to use brackets instead of parentheses in the `Option.Some` on the bottom here, that would actually be legal JavaScript – an element access expression and assignment – which is not what we want, so we have to use the syntax with parentheses. And that preserves the duality with the application side. As far as custom matchers, this is the actual example culled from Scala's code tour. It shows how in Scala this is an `unapply` method; in the case of the pattern matching proposal this is a single custom matcher. It's doing the same thing. Here I am executing this and pulling out the value, which matches the first element in the result. If it doesn't match, it will fail. The array itself can be used to pick apart the elements that would be part of this. So you can pull apart a `CustomerID` to extract the value. +

RBN: A custom matcher receives 3 arguments. The first argument is consistent across both pattern matching and extractors: it's the single input value. Custom matchers are essentially a unary function, but have additional arguments passed in. The second argument is a hint. Pattern matching can say, for example, `x is Point`, and that uses the custom matcher. In that case the syntax is not doing further destructuring, so I don't need an array result; the hint in that case is what says so, and the matcher can return a Boolean and skip the extra work. That form can't be used in binding and assignment patterns – you can't write `const Point = whatever`, because that's already legal JavaScript. +

RBN: In pattern matching, there is the possibility for the match to be refutable: if it doesn't work, it moves on to the next item in the list. In the destructuring case, the destructuring can only happen if it succeeds – here, I must destructure. Since I must always do this, the result must be a list.
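The three-argument custom matcher shape described above can be sketched in today's JavaScript. This is an illustrative simulation only: the proposal uses a well-known symbol and dedicated syntax, so the `customMatcher` method name, the string hints, and the `extract` helper here are invented stand-ins. +

```javascript
// Simulation of the proposal's custom matcher protocol, using a plain
// static method name in place of the well-known symbol.
class Point {
  #x; #y;
  constructor(x, y) { this.#x = x; this.#y = y; }
  // subject: the value being matched; hint: "boolean" when no array result
  // is needed (e.g. `x is Point`), "list" when destructuring; receiver is
  // the debated third argument (unused in this sketch).
  static customMatcher(subject, hint, receiver) {
    if (!(#x in subject)) return false;   // refuted: not a Point
    if (hint === "boolean") return true;  // type check only, skip the array
    return [subject.#x, subject.#y];      // extraction results
  }
}

// Simulated `const Point(x, y) = p` – destructuring is irrefutable,
// so a failed match throws.
function extract(matcher, subject) {
  const result = matcher.customMatcher(subject, "list");
  if (!result) throw new TypeError("subject did not match");
  return result;
}

const [x, y] = extract(Point, new Point(1, 2)); // x === 1, y === 2
```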
+

RBN: The third argument, the receiver, is one that is still kind of up for debate in the champions group. I don't particularly agree with its inclusion, but it is included for the sake of completeness. I have a separate slide for that. +

RBN: The rationale for the receiver argument comes from how JavaScript normally works. If you were to use something like a `foo.IdExtractor` (?) as the extractor, then by the time we get to the point of calling the custom matcher, the subject of the call is whatever `foo.IdExtractor` evaluated to. The `this` value of `foo` that you might have expected, if you consider this to be like a function call, would be lost. Instead, the receiver gets passed along – the receiver the `foo.IdExtractor` access would have had – in case you need to reference something as part of that state. There is an example here where you might have a family of things that you want to validate, and such validation is dependent on state in the family, so I need to reference it. This uses the instance. Again, as it shows at the top, it preserves the `x` as `this` in `x.y is z`. Generally that's not necessary for most custom matchers, because they are designed to operate on a class, but there might be corner cases where it's necessary. So we have considered it as part of the proposal as it stands. I do think there are other options that could be pursued if we don't have this, such as having something like a custom matcher getter that binds the instance that you care about, in which case the receiver isn't necessary – though there's overhead with that. This is something we are debating; we are including it for consistency, but we may decide we do not need it and cut that capability. +

RBN: To continue. One thing that came up in the February plenary was the performance of extractors.
There is a concern that, since extractors use iterator destructuring, they inherit the performance concerns that array destructuring currently has with the iterator protocol. Object destructuring is fast in all implementations I have seen: things like escape analysis help, and in general, accessing known properties is faster than the numerous steps that are part of the iterator protocol. One option that we might want to pursue is having implementations look at improving that speed. It's something that would benefit the entire ecosystem – not just everyone using extractors, but also code using React's `useState`. We did some very rudimentary benchmarking comparing array destructuring versus saying `{ 0: value, 1: setValue } = useState()`, and found that the object form tends to be 30% faster than array destructuring. If we can improve the performance for those cases where we know it's an array and can skip using the iterator, when it would not be observable to the user, that benefits the entire ecosystem. Otherwise, the returned array could be destructured as if it were an object under the hood. If implementations don't make those optimizations on arrays themselves, one way we considered moving this proposal forward is to continue using iterators as part of the proposal until such time as we can determine whether that optimization is feasible; if it's not, we use array return values instead. Alternately, we could choose to use only array return values for now and then maybe update that in the future to support iterators. There is various thinking there. I am choosing a route that raises the tide for everyone, rather than just extractors, but if we move forward with extractors there's the potential for something faster by pulling out the properties. This is more of a Stage 2 concern. I think the way we have it is the right solution.
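The rough benchmark comparison described above can be written out directly in today's JavaScript; the `useState` below is a local stand-in for React's hook, not the real thing. Both forms read the same two elements, but the first goes through the iterator protocol while the second does plain indexed property reads. +

```javascript
// Stand-in for React's useState: returns a two-element array.
function useState() {
  return [0, function setValue() {}];
}

// Iterator protocol: allocates an array iterator and steps it twice.
const [value1, setValue1] = useState();

// Indexed-property form: two plain property reads, no iterator machinery.
const { 0: value2, 1: setValue2 } = useState();
```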
And I think that improving array iterator or array destructuring performance is something that engines will hopefully want to do, and I am working on putting together a convincing rationale to talk to engines about improving the performance here, so that this doesn't become a blocking concern or require a change to that behavior. +

RBN: There were some other alternatives considered that we discussed, even in plenary. C# deconstruct methods have out parameters. They don't allow for undersupplying arguments; you have to use discards. In the context of destructuring, that's a better fit for a typed language. Python's pattern matching uses `__match_args__` and property names. It can get away with this: Python doesn't evaluate user code during matching, but Python's properties are also not private – there's no hard privacy in Python. As a result, you can access those things. One of the values of extractors is that you can have private state that you only want to make available through extractors, because you might not want it mutable or observable without them, and it forces people into a certain development pattern. Like an `Option.Some`: it doesn't have the type checks you want to ensure, therefore you might want to say that you must use `const Option.Some(value) = ...` or use it in pattern matching, to push people towards the way it's supposed to be used. +

F# active patterns use tuples, similar to arrays. They don't permit undersupplying arguments and require discards, which is not convenient in the destructuring case, whereas JavaScript destructuring doesn't require exhausting the array. +

RBN: So I will show some examples of this. I have shown the point example before. I have shared a version where the `get x` and `get y` accessors are not part of the class, so that the extractor becomes the only way to access this data. Or maybe this is the way to access the data raw, where the public form (?) applies some type of manipulation on the data that you might not want observed.
Therefore extractors give you a mechanism that provides privileged access to this data in a read-only fashion. You can access that private data, but only if the type matches, and you leverage the extractor mechanism's consistent exception that is thrown when something doesn't match at the provided site. +

RBN: Another example I have here is implementing algebraic data types without enums. You could have that. It won't have the performance benefits, but it shows that the syntax and mechanisms we employ are valid and applicable without having enum syntax in the language yet. This does the destructuring and pattern matching, and the customization around pattern matching and destructuring that does this type of validation is important: if I was matching on two similar shapes, being able to validate the type is important so that it can distinguish between the two. +

RBN: Some other examples are common ones we have seen, like having a pattern that matches against text. I forgot the `let` here. Let's assume you might be matching a specific string that is defined elsewhere. This is how pattern matching uses a variable pattern to indicate a binding declaration; these are examples of some of the uses of that syntax. +

RBN: Regular expressions can be used in this as well. You can pull out match groups and use them to more easily reference things, rather than having to do `.groups` accesses deeper within a destructuring pattern, et cetera. +

RBN: So I will go to the queue at this point and see if there's potential for advancing to Stage 2. +

EAO: Thank you for the discussions over the weekend on this. They helped me see the value of extractors, even if the value I see in extractors is maybe not the value that is intended. I really see this as possibly the closest or best approximation to runtime types in JavaScript. And I am not really that interested in the destructuring part of the proposal.
But what happens if you just get one value out of the custom matcher? In the way that this effectively (?) we have runtime type checking. +1 for Stage 2 for exploring what that looks like. I have thoughts about the syntax, but I think those can be addressed later. +

RBN: I will say – and I said this on the issue tracker as well – I don't necessarily think that extractors are the right fit for runtime typing. Generally there are two things you want out of runtime typing. One is assertions that a value is of a type. That's hard to do in JavaScript consistently. You generally want to be able to at least validate that inputs and outputs are correct, and there is no mechanism for validating that something is the right type when creating a variable, for example. So there is some potential with extractors, but you can't say extractors are a runtime type mechanism; the benefit for most other scenarios is pulling things apart. It's not a general purpose solution for runtime types. The other thing you want out of a runtime type solution, in many scenarios, is reflection and introspection. This is important for dependency injection, ORMs for databases, runtime type validation for a passed callback – how do I know it's the right type to apply without evaluating it? A better fit for that is parameter decorators, which could also be used to validate return types. That's a better approach for runtime type validation. You would definitely be able to say `x is Point` to check the type. And where today you would say `instanceof`, the custom matcher syntax is designed to work around or improve the state around `instanceof`, which doesn't work well across realms or with primitives – take an array from another frame or realm, and `instanceof Array` in your context doesn't work. So you don't have to have that complexity in pattern matching, and it deals with (?) something that has been considered a wart in the language for a while.
I definitely think that there are some potential avenues for extractors, but I don't want to turn the focus to runtime types, because I don't think this is the best proposal for that case. +

So the next – +

USA: Yeah. EAO, is there any more? Next we have DE. +

DE: Yeah. I am very excited about this proposal. I think the use cases you presented are very good and I like the semantics. I support this going to Stage 2. There are obviously interactions with other proposals like pattern matching, so I think we can make sure to maintain alignment during Stage 2; you all have already been explicitly collaborating, so that's a strong mechanism for the alignment. Yeah. Great work. +

GKZ: Hi. In general, I am excited about and supportive of this proposal, and pattern matching as well. My comment is more about the tradeoff between the flexibility of the design and understandability, looking at the use cases in the proposal. A bunch of use cases are of the form: I have some kind of datatype, for example `Point`, and I am matching to check that this is a `Point` and then getting values out of it. This seems well motivated by how this works in other languages. But some other use cases are running arbitrary code and doing something – for example, on page 10, the ID extractor parsing a string and getting the output out of that. So on one hand, in the first set of use cases, extractors are the dual of creating a datatype; in the other, this is sort of the dual of just a function call. And there's a tradeoff: this is more powerful and more flexible, but it can also be more difficult for the reader and programmer to understand. Looking at a piece of code that is using a bunch of these extractors, you are not going to be able to tell – for example, in your geometry example on page 16.
When I am looking at the different lines, I don't necessarily know – you know, I can assume this is checking that this is of a specific datatype and extracting values out of it, but I can't necessarily make that assumption without looking at the runtime code, because this could also be doing something completely different. For example OCaml, which I use every day, doesn't have this increased flexibility, and I don't find myself ever missing this increased power. You have some use cases around regular expressions, mostly around parsing strings. These can be solved in other ways – for example, calling a function and matching on the output of that function. You can imagine, here, rather than matching directly with the regular expressions, you call a function which runs the regular expressions and outputs a datatype – maybe that means outputting an ADT or a union of object types – and you match on that instead. I am just concerned about the understandability when you have the full power to do something fully arbitrary (?) +

RBN: In practice, in most examples that I have seen of actual real world code that uses similar syntax, you tend not to get very deep layers of extraction; just like anything that is more complicated, you want to break it down. In general, I usually see about 2 layers of extractors being used. Let me see if I can find the example here. It's this case where you have an `Option.Some` of a `Geometry.Point`, where you have something that might be an `Option.None` or could be an `Option.Some` of one of those two things, and you want to break that down. It's generally more efficient to collapse this down, especially since the pattern matching proposal uses caching to increase performance and allow iteration results to be re-used across matches – the `Option.Some` extraction can potentially be re-used, since we have validated that in the first branch.
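The two-layer `Option.Some(Geometry.Point(x, y))` case RBN describes can be simulated with plain objects standing in for the proposed extractor protocol; the `of`/`extract` helpers here are invented for illustration. Each layer either produces results for the next layer or refutes the whole match. +

```javascript
// Simulation: `extract` stands in for the proposal's custom matcher,
// returning an array of results on success or false on failure.
const Some = {
  of: (v) => ({ kind: "some", value: v }),
  extract: (s) => (s && s.kind === "some" ? [s.value] : false),
};
const Point = {
  of: (x, y) => ({ kind: "point", x, y }),
  extract: (s) => (s && s.kind === "point" ? [s.x, s.y] : false),
};

// Roughly `Option.Some(Point(x, y))`: unwrap the outer layer, then
// hand its single result to the inner extractor.
function matchSomePoint(subject) {
  const outer = Some.extract(subject);
  if (!outer) return false;
  const inner = Point.extract(outer[0]);
  if (!inner) return false;
  const [x, y] = inner;
  return { x, y };
}

matchSomePoint(Some.of(Point.of(1, 2))); // { x: 1, y: 2 }
matchSomePoint(Some.of("not a point")); // false
```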
The other thing that is really interesting – if I go back to the regular expressions example, let me see, that was… this one – is that if you have a function that says "I am going to validate these inputs and do the destructuring", and the first input has to match an ISO date-time: if that fails and the function returns `undefined` because the input doesn't work, what throws? You will end up with an error indicating that `undefined` can't be further destructured – I can't remember the exact error text in the engines, but I have seen "false can't be iterated". And that's a bad user experience. So now you have to write your API such that even in the failing case it returns something useful, or on the consumer side I have to handle having a failed return value. So it adds more work on either the side of the implementing function or the side of the person who is consuming it. That is what extractors help alleviate, because the validation and the extraction happen step-by-step. When the error is thrown, it says "this doesn't match". It's more readable and usable than just "`undefined` is not an object" or whatever you end up with in those cases. +

GKZ: Regarding what you first mentioned, to clarify: I am suggesting that this should always be applied to "I have some kind of datatype". It should always be the case that I am saying "this is of that datatype", versus taking a string and destructuring that. +

RBN: I think the majority of the cases will be that. But I have seen more than enough of the others. The cases I have had are around things like – I don't think it's currently in the slides, but there's a book example that takes multiple arguments and does coercion to an Instant. That's based on actual code that I had written. It is one of the motivating reasons why I started looking into this proposal in the first place.
I want to do the validation and avoid the complexity of handling defaults and all of the cases – one of the validations is that it's a string, and I want to be able to do data manipulation to make them consistent in case – and that's not the 95% case. Since it's based on Scala extractors, the design is flexible enough to handle both cases, even if the majority of cases are data types. +

GKZ: We can move on to the next point of the queue. Supporting the 5% use case means that code using this feature will be more difficult for the programmer to understand, because they can't make the assumption that this is always matching "this is of the datatype"; it could also be doing some kind of completely arbitrary thing as well. So having more power makes it also harder to understand what it's doing. Especially in the match use-case, you can imagine someone will want to know they exhaustively checked the input value, whether that's the programmer looking at the code or a static analysis tool. Each extractor case could either be "we are matching this value, now moving to the next datatype", or it could be doing something else entirely. I agree that this is more powerful and there are definitely use cases for it, but there's a cost and tradeoff to allowing that in terms of understandability. +

RBN: Yeah. I would like to get to Mark Miller's topic. I have thought about that. If you can ping me after, I would appreciate it. +

MM: Hi. First of all, my compliments, plus a thousand, on this. I want to see this in the language. When you refer to an extractor in a destructuring and you still call it irrefutable, I want to clarify: first of all, allowing extractors in destructuring makes destructuring refutable in the sense that it can fail. My question is: when it fails in a destructuring, it always ends up causing an exception to be thrown. Correct? +

RBN: Correct. And I also think I said refutable when I meant irrefutable. +

MM: Okay.
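The distinction just confirmed – destructuring is irrefutable and throws on failure, while a match construct refutes and falls through – can be illustrated with a small simulation; the `Even` matcher and both helper functions are invented for this sketch. +

```javascript
// A matcher that succeeds on even integers, producing one result.
const Even = {
  extract: (n) => (Number.isInteger(n) && n % 2 === 0 ? [n / 2] : false),
};

// Destructuring-style use: irrefutable, so failure is an exception.
function destructureEven(n) {
  const result = Even.extract(n);
  if (!result) throw new TypeError(`${n} did not match Even`);
  const [half] = result;
  return half;
}

// match-style use: refutable, so failure falls through to the next arm.
function describe(n) {
  const result = Even.extract(n);
  if (result) return `even, half is ${result[0]}`;
  return "odd"; // the next "arm"
}

destructureEven(4); // 2
describe(3); // "odd"
```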
+

RBN: Extractors in destructuring – binding and assignment patterns – are always irrefutable matches. If they fail, they throw; there's no alternative path. The only place a pattern is refutable is pattern matching, where there is either a successful or a failed match. But the refutability lines up with how it works in the other cases. +

MM: Great. So my question is: if we extend the hint – which I would prefer (?) – so that the extractor can tell that it's used in a context where failure would cause a throw, then it could itself decide instead to do a throw with a better diagnostic. In my experience with pattern matching libraries, including one I wrote, having the individual matchers throw individual diagnostics is valuable. It also raises the diagnostic-quality issues. +

RBN: That is a fair point and worth discussing. The hint is the Boolean that indicates whether the thing is further destructured. I think that's valuable for discussion. +

MM: Okay. Good. +

USA: Ron, that is the end of the time, unfortunately. There are a couple more items. Also, there was a reply by WH, who said that he agrees with the concern that was raised earlier by GKZ. How would you like to proceed with this? +

RBN: At this point, we are short on time, but I would like to potentially ask for consensus on Stage 2. I think most of these are relatively minor things to discuss; whether extractors should apply to non-datatype cases is something I am in favor of discussing (?). We have all the semantics described that we are looking for. There are a couple of things to clean up in the specific syntax as we go. I believe we have met the bar for Stage 2. Is there support for Stage 2? +

WH: I have concerns which we didn't get to in the queue. +

USA: Waldemar, to clarify, do you think your concerns would block Stage 2? +

WH: They should be addressed before Stage 2. I like the proposal, but I have some concerns.
I think we should go through the queue before asking for Stage 2. +

RBN: In that case, if there's any additional time on the agenda over the next few days, I would like to consider an extension to discuss this further. I think we are probably breaking for lunch soon or now. +

USA: Yes. So, RBN, thank you. We will capture the queue and put an extension on the books. I cannot promise that we will get to it by the end, but if we manage to preserve enough time then we will. Thank you. +

RBN: And if the folks on the queue would ping me offline, we could have an offline discussion and speed things up when it comes back. Thank you. +

USA: Thank you all. Okay. Then we are slightly behind – 3 minutes behind for lunch. See you all at the top of the hour. +

### Speaker's Summary of Key Points +

- Aligns with Extractors in Pattern Matching +
- Proposal specification text written +
- Destructuring performance investigation: may switch to array-as-object destructuring in Stage 2; preference is to push for array/iterator destructuring performance improvements in engines, as there are more benefits. +
- Continued on Day 4. +

### Conclusion +

- Continued on Day 4. +

## eval/new Function changes for Trusted Types as Normative PR or Stage 3 +

Presenter: Nicolò Ribaudo (NRO) +

- [PR](https://github.com/tc39/ecma262/pull/3294) +
- [slides](https://docs.google.com/presentation/d/14AjvbW2-aNvlirB-7pPhIAeM7oWKtjqONRcyp9ID7gg/edit?usp=sharing) +

NRO: So hello, everybody, again. This presentation is about a new proposed change to how we handle `eval` and `new Function`, to accommodate the Trusted Types proposal. So, what is Trusted Types? It is a web platform feature that restricts the use of language features that would normally enable XSS attacks, so you can decide which ones you allow and which you do not. Some of those are HTML elements, or setting the text of a script. This is done in two different ways. One is a capability object.
You can have an object that represents the capability to inject arbitrary code, and there is a global sanitizer hook. When you use one of these features, like injecting HTML without using the proper capability object, this global hook is called to sanitize the HTML code and prepare it to be injected. Or it can throw an error to prevent it from being injected. + +NRO: Yeah. So an example of how this works is that we have this trustedTypes global. And we can create a policy. Not everybody can create policies. The policies we can create are controlled through a header. And only code that has access to the trusted policy can create strings that can be injected. In this example, the trustedHTML object is created by somebody that had access to the policy, so it works, while trying to just inject a plain string fails. And the other way that trusted types work is through the global hook. You can define the default policy – only when the header allows it – and you can sanitize the code like in this example, in a way you consider safe. + +NRO: Trusted Types will also send violation reports to a predefined endpoint for when somebody is trying to violate the policy. So if you have a policy that says you must use trustedTypes, but then some code is trying to not use those – it's trying, for example, to set some HTML through an untrusted string – it will send a violation report to the endpoint containing a sample of the code that tried to violate the policy. + +And I am showing here the HTML endpoint version, but it's possible to register a global event listener to receive the same data. + +NRO: Trusted Types is already implemented in Chrome. It was implemented a few years ago. And Igalia, Salesforce and Google are currently working on support in other browsers. + +NRO: So how does this – I must stop talking about HTML now. How does this interact with ECMAScript? Like, why do we care? Well, we provide two very powerful functions that eval arbitrary code: `eval` and `new Function`.
And the Trusted Types proposal is like looking to – it's like trying to hook into those. But we don't provide the necessary host hooks for this. The Trusted Types spec actually already specifies this behavior, based on an older version of the proposal. Can we actually expose the info that Trusted Types needs so they can do their thing in `eval`/`new Function`? + +NRO: So what does it need exactly? Two main parts. First of all, they need a way to tell where the strings that we pass to HostEnsureCanCompileStrings come from. For example, when passing an object to the function (?), like a Trusted Types object, `new Function` will stringify it, and the host only receives the string and has no way to check, first of all, whether the string comes from a Trusted Types object, and whether the string matches the contents of that object. And second, we need a way to tell `eval` that it should not immediately return for some specific objects, but instead should consider them as dynamic strings. `new Function` already stringifies all its parameters, while `eval` will return its argument whenever it is a non-string parameter. Trusted Type objects are objects. And lastly, Trusted Types has the sanitizer hook that lets you transform HTML on the fly. They are not currently looking to extend this to `eval`/`new Function`. It was in the old proposal, but we don't do it now, because you cannot really easily sanitize code. If we want the capability, we can talk about it. But it's not something we are proposing right now. + +NRO: So what changes do we need to do in ECMA-262 to expose this? There are – so the main one is to change the HostEnsureCanCompileStrings hook. Currently the hook receives the realm, the parameter strings and body value, and whether it's a direct or indirect call. For Trusted Types, we need it to expose the original objects. So we can add here the parameterArgs and bodyArg parameters: the values we pass to the ???er before stringifying them.
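The two requirements NRO lists can be sketched in userland. This is a minimal simulation under stated assumptions, not the spec mechanism: `makeTrustedScript`, `hostStringifiable`, and `evalLike` are hypothetical stand-ins for a host-branded TrustedScript object, the host-level brand, and the updated `eval` algorithm.

```javascript
// Userland simulation of the proposed eval semantics (all names hypothetical).
// Today, eval returns any non-string argument unchanged; the proposed change
// lets the host mark certain objects as "stringify and evaluate me instead".
const hostStringifiable = new WeakMap(); // object -> its code string (the "brand")

function makeTrustedScript(code) {
  const obj = { toString: () => code };
  hostStringifiable.set(obj, code); // host-level brand, unreachable from user code
  return obj;
}

// Stand-in for the updated eval algorithm:
function evalLike(x) {
  if (typeof x !== "string") {
    const code = hostStringifiable.get(x); // roughly, the proposed host hook
    if (code === undefined) return x;      // pre-proposal behavior: return as-is
    x = code;                              // note: the slot string, not toString()
  }
  return eval(x);
}

const trusted = makeTrustedScript("1 + 1");
console.log(evalLike(trusted));       // 2 - branded object is evaluated
console.log(evalLike({ plain: true })); // the object itself - behavior unchanged
```

The key detail, matching the later queue discussion, is that the code string comes from the host-level brand rather than from calling `toString()` on the object.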
+ +And then for the violation reports, we need to pass the compilation type – the construct we are using to try to compile some code, so whether it's direct `eval`, indirect `eval`, or the Function constructor (?) – and the full code string that we are trying to compile, so they can take samples of it and send them to the server in reports, to make it easier to figure out where the violation is coming from. + +NRO: And we need to change how the host hook is being called. This is the current definition of the Function constructor. In the current version we stringify the arguments and body, then call the hook with the code, and then parse the source and create the function. We need to – we just add the new parameters to the host hook, and we need to build the full source before calling the hook so we can pass it to it. + +NRO: And we also need to change how `eval` works. So we need to have a way for objects to be marked as – hey, I want to be stringified. The current pull request is doing that through a host-defined slot, like an internal slot that hosts can attach to arbitrary objects. And so `eval` needs to check if we have one of these special objects, and if we do, stringify it. Otherwise, keep the existing behavior, which is to return the parameter if it is not a string. + +Following the discussion from Monday about import source, where we discussed how we want to set a precedent that hosts should not communicate with 262 through custom internal slots on arbitrary objects, but use a host hook, I am proposing that we should have this hook that hosts can use to mark objects to be stringified. + +How does this integrate with the ShadowRealm proposal, which is at Stage 3? First, it's not currently exposed in ShadowRealm. If you call `eval` in a ShadowRealm it follows the existing rules. It might be exposed in the future – I don't think there is a technical reason for not doing it, but it's not currently exposed. There is a case in which the new changes would apply: when you call the `ShadowRealm.prototype.evaluate` method.
It should come from the outer policy. But it just means that we need to properly wire the objects in `ShadowRealm.prototype.evaluate`. + +NRO: The spec text is currently in a pull request on the ecma262 repository. It was based on the dynamic code brand checks proposal, but a lot has been done in the past few months, so it's now presented as a pull request. And you can see the specification there. + +NRO: So what are the tests for this? This is difficult to test in test262, because the only behavior change we have is that sometimes code that might throw now might not throw. Whether the code throws or not, there is the potential that there could be an object that, when passed through `eval`, is evaluated rather than returned as-is. + +NRO: I tried asking the test262 maintainers what is better for this. They think it's better to test it in WPT, where you have access to the objects and so you can actually test the behavior when using the function. + +And the proposal is already tested there; you can find the tests. + +NRO: Yeah. So, summarizing what I am asking here: I am asking for consensus on advancing the proposal – the version of the proposal that was at Stage 1 – with these changes: the updated host hook, to receive the original objects, the compilation type and the source, and updated `eval` to allow stringifying some objects. + +NRO: I am looking for consensus for Stage 3, as I think all the requirements including tests are already met. Editors started reviewing the pull request but we don't have a full review yet. If we cannot get consensus on Stage 3 because we think it should have tests in test262, even if we cannot fully test it because it's mostly host-defined, then I would ask for Stage 2.7. Let's go to the queue. + +CDA: MF? + +MF: I raised some of this on the pull request originally thinking it was an editorial thing, but realizing it's actually a normative question. So right now, there's just a Boolean-returning host hook that determines whether to `ToString` the object. And two related questions here.
Could we have a precomputed string associated with each of the objects, instead of computing the string each time `eval` is called? That way we get a consistent string produced for each object that applies here. And second, if we can't do that, can this operation throw? We are using it with a question mark there. Are we allowed to use an exclamation mark? + +NRO: Yes, we could avoid calling `ToString` and instead have the host hook return the string for the object. The reason the pull request uses `ToString` right now is just because `new Function` already stringifies its arguments, and so it did the same in `eval` for symmetry. But there is no technical reason for which we could not just more statically get the string rather than calling `ToString` on the object. This could return – the host still checks that the string you're passing to HostEnsureCanCompileStrings matches the string that was contained in the object, so this is not a security problem. Yes, if there is a preference for just not calling `ToString` we can do that. + +MF: Yeah. Even if it just came down to editorial preference, I would prefer to not have to call `ToString` there. But on the normative question, I prefer not having to do a `ToString`. + +NRO: Okay. And do you propose we only do that for `eval`, or that also `new Function` for these special objects does not call `ToString`? + +MF: I would prefer to do it the same way, that they both do not call `ToString`. + +CDA: KG? + +KG: I agree with MF. The point that I was going to make was that calling `toString` would be bad for security because you could override it. You say it checks that the resulting string is the one they expect, but given that we are doing that – given that maintaining the security property requires CSP to have a notion of what the correct string is – it seems silly to call `toString` and then confirm that the results are the same.
+ +Especially given that calling `toString` is observable: you could override it to have side effects and then return the string. And that's just weird. So I also strongly prefer to just pull the string out of the slot and not call user code here. For both `eval` and `new Function` in the case of these branded objects. + +NRO: Okay. Yeah. We can update the proposed spec change to do that. MF, you're again on the queue? + +MF: This next one is a little bit more hairy, I think. This PR exposes a synthesized source text for functions via this host hook. For use, I believe – correct me if I'm wrong – for confirming against hashes for CSP. When we defined Function.prototype.toString – well, sorry, when I revised Function.prototype.toString many years ago now, we were very careful not to fully define it so that it could match the web reality at the time for all of the web browsers. But my understanding is that this will make a canonical synthesized toString representation of a function that doesn't necessarily match Function.prototype.toString in some browsers. + +MF: That makes me uncomfortable, and also then because of that question it also makes me a bit uncomfortable with how quickly we're looking to move this through the stage process. We're looking to go immediately to Stage 3 here. + +NRO: Yeah. I was not aware that the `toString` – that the string we build in `new Function` is not necessarily the original string. And from what I discussed before with the Trusted Types folks, I don't think they were aware either. + +NRO: One of the options that they were thinking of was to – like, instead of us passing the string to the host hook, we would just pass the various parameters, so they build the string on their side – + +MF: Yeah. That's what I was thinking would be a solution here.
Because all of the components that are necessary for that are passed out, it doesn't force us to make the decision for what that canonical `toString` representation is, and allows the upstream to maybe base the hash on just the body, or just the body and the parameters, but in a way that's not like a syntactically valid function construct, or whatever they want to do. That would be out of our hands. I don't want us to be the responsible party there. + +NRO: For additional context, this proposal is right now only implemented in Chrome, and Chrome doesn't use the string as-is. It strips the `function` part at the beginning, because when building these reports, they only get the first 40 characters, and so they want those to contain something significant and not the `function` prefix. So they are already not using the string as-is. So I believe we can just ask them to take responsibility for building the string that they need. + +MF: Yeah. That sounds great. Thank you. + +NRO: So to clarify, this would be removing the code string parameter here. The spec passes the individual pieces and the host can join the pieces by themselves. MM? + +MM: Yeah. I have written down on the TCQ something that I am concerned about in this `toString` discussion. I thought I understood this and now I am much less confident that I understand this and its implications. If you pass – if you just call – under this proposal, if you just call `eval` and pass it an object as argument, what happens? + +NRO: `eval` will call the host hook on the object. This hook can return either true or false. If the hook returns false, then nothing changes, and we still have the pre-proposal behavior, where `eval` just returns the object as it is. If this hook returns true – or rather, given the earlier discussion with Michael and Kevin, the hook would return the string instead of a Boolean – then `eval` evaluates that string.
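KG's objection above is easy to demonstrate: `toString` is ordinary user code, so a hostile object can observe the call and answer differently on each invocation, opening a classic check-then-use gap. A small illustration:

```javascript
// Why the committee preferred reading the string from an internal slot:
// toString is overridable user code, observable and free to change its answer.
let calls = 0;
const sneaky = {
  toString() {
    calls += 1; // observable side effect
    return calls === 1 ? "doSafeThing()" : "doEvilThing()"; // different per call
  },
};

const first = String(sneaky);
const second = String(sneaky);
console.log(first, second); // doSafeThing() doEvilThing()
```

If the string used for the CSP check and the string actually compiled came from two separate `toString` calls, they could differ; pulling the string once from a host-level slot closes that gap.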
+ +MM: The string comes from the host hook called during `eval`? + +NRO: Yes. + +MM: I see. I see. Okay. I am comfortable with that. Then it's completely a host policy, where it gets the string from. + +NRO: Yes. + +MM: Also I wanted to – so I think that this is – I think I would still like to see this only go to Stage 2.7 at this meeting. But I support it for 2.7. I do have one more question, though. You mentioned that ShadowRealms don't deal with this yet in the realm's `evaluate`. Compartments also have a compartment `evaluate`. And compartments obviously came much earlier than ShadowRealm. I want to make sure – since you know compartments well – that compartment `evaluate` would behave like ShadowRealm's would. + +NRO: Yes. I am not super confident right now about the exact behavior in the compartments proposal. + +MM: Okay. + +NRO: So I would expect that if you call `eval` inside the compartment, it would call the hooks inside the compartment. While if you call the `evaluate` method of the compartment from the outside, it would check for permission from the outer compartment rather than the inner one. + +MM: That sounds very plausible. And I like the answer. I think that is definitely the right way to approach it. The other thing with regard to the hook – which compartment the hook is with respect to – is that one of the long-term goals of compartments is host virtualization, virtualization of host hooks, so that JavaScript code in the position of creating the compartment can act as host to JavaScript code within the compartment. Do you see any conflict with that with regard to this proposal? + +NRO: No. I think we just have to expose this new hook in the list of hooks that compartments can override. + +MM: I see MF thinks I might be confused. + +MF: Yeah. I wanted to clarify something. When you were asking about the string, there's kind of two strings and two host hooks involved here. And we were showing you one here.
Yes, this returns a string for `eval`. The other one I was talking about was in the Function constructor, where – + +NRO: MF, that one is also here on line 5. There is a hook to get the string and a hook to verify whether we can perform the compilation or not. In `eval` we are using both. + +MF: The interesting case is in the Function constructor, where we synthesize a string, not just use what is returned by this host hook. + +MM: Okay. But the host hook – the string that it returns is then used in order to construct the string that's then passed back. Is that correct? + +NRO: It is used as a portion of it. + +MM: Right. Okay. Good. Good. That makes sense to me. So I still support 2.7. + +MF: Okay. + +NRO: Okay. Just a question, MM: technically, the difference between 2.7 and 3 is just regarding tests. Can you like – + +MM: Yeah. Maybe I am hanging more on the distinction between 2.7 and 3 than is intended. But once it's at 3, it becomes very, very expensive to make even minor adjustments. Whereas I sort of understood that the gap between 2.7 and 3 means that if we find a problem, we can still fix it – and obviously even at 3, if we find a problem that is bad, we fix it before it goes to 4. But I am okay with 3, if there's no – if everyone else is. I think I prefer the extra little room there. + +NRO: Okay. Thanks. SYG? DE maybe first since he's replying to this. + +DE: I think for the 2.7 versus 3 thing, we were kind of explicit in setting that up, that we are not adding extra stages of incremental consensus-gathering. So I think – + +MM: Okay. + +DE: I would be hesitant to start in that direction. + +MM: Okay. + +DE: If we are holding back Stage 3, I want to know exactly what we are looking for. + +MM: I agree. I am fine with 3 and I am fine with 2.7, if there's – if other people would like 2.7. + +NRO: Okay. Thanks. SYG? + +SYG: This is a clarifying question. I didn't quite understand, if you could go back to a previous slide. Let me try to find this slide number. I think slide 14. Yeah.
For number 1 there, how does the change to HostEnsureCanCompileStrings address number 1? + +NRO: Because we are passing the original object to the hook – so, like, by telling it where the string comes from, and then giving the hook a way to check whether the string originally came from the trusted object. + +SYG: How does it know that? + +NRO: It sees the object is – it's a TrustedScript object; it will have a mark as such, some host-level slot. + +SYG: What is the difference – okay. It's not – so maybe I am misunderstanding. Is the use case that you are trying to evaluate a string and you want to know the ultimate provenance of the string, or that you evaluate something that is an object that needs to become a string, and you pass the object in addition to the string? + +NRO: Yes, the second thing. The example shows the first, but it's the same. The trustedHTML is an object that has the internal slot marking it as trusted. And so we call `eval` or `new Function` passing this object to the hook, so the hook can verify the string came from one of the marked objects. + +SYG: Okay. So making sure that it's not like about building some kind of taint tracking into strings, which is not possible. + +MM: There is no internal slot being brought into the proposal. It's a hypothesized one provided by the host, if they want it? + +NRO: Yes. + +MM: Okay. Great. + +CDA: Okay. Time. + +NRO: So I would ask for consensus for Stage 3. Mark, I will follow up with you on the compartments question to make sure I didn't understand it wrong. + +MM: Great. I am fine with Stage 3 even with the follow-up happening later. + +NRO: Okay. + +CDA: Okay. MM supports Stage 3. Do we have any other support? + +MF: Can you clarify Stage 3? + +NRO: Yes, with the changes. So Stage 3 with this host hook in number 2 returning, instead of a Boolean, I guess either null or a string. And if it returns a string, we use the string instead of calling `toString` on the object. And the second change: we are – we are not passing this last built string to the host.
Instead we just tell the Trusted Types folks to build the string themselves, because we cannot guarantee a stable format. + +CDA: Okay. We have +1 from MM, +1 from DE, +1 from MF with those changes. Do we have any opposition to Stage 3? + +JHD: I was just curious, what is the reason why this is a Stage 3 proposal, instead of a Needs Consensus PR? + +NRO: Yeah. I would be fine with either. I actually – + +MM: I would object to a Needs Consensus PR. + +NRO: Yeah. Like, this was a Stage 1 proposal, and if it immediately gets to Stage 4 and fails… + +JHD: A Needs Consensus PR would still need some of the requirements of a proposal. So I guess that's fine: if there's an objection to it being one, it can be a Stage 3 proposal. MM, I would love to hear more on Matrix about why it's one or the other, just for future rubrics. + +CDA: Do we need reviewers? + +NRO: Yes. Does anybody want to review? It's maybe a 20-line-long pull request. + +MF: I mean, I am going to be reviewing it anyway, so you can add me to the list. + +CDA: Alright. Thank you, MF. Anyone else while we are here? You can always volunteer later as well. Okay. Let's move on. Thank you, NRO. + +NRO: Thanks.
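The agreed direction on the `new Function` string can be sketched briefly. This is a hypothetical illustration, not spec text: instead of ECMA-262 handing the host one canonical source string, the host hook receives the pieces, and the host (here a made-up `hostBuildSample`) assembles whatever form it wants for CSP hashing or violation samples.

```javascript
// Illustrative host-side assembly (names are hypothetical, not from the spec).
// A host might skip the "function anonymous" prefix entirely, since violation
// reports only carry the first characters of the sample and the exact layout
// of Function.prototype.toString output is deliberately not fully specified.
function hostBuildSample(parameterStrings, bodyString) {
  return `(${parameterStrings.join(",")}) { ${bodyString} }`;
}

console.log(hostBuildSample(["a", "b"], "return a + b;"));
// (a,b) { return a + b; }
```

This keeps the choice of canonical representation out of ECMA-262's hands, which was MF's request.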
+ +### Speaker's Summary of Key Points + +- To support integration of Trusted Types with `eval` and `new Function`, it is proposed to expose to the host: + - The original parameters passed to `new Function`/`eval` + - For `new Function`, the final string built by concatenating the various parts +- We also need to expose a hook to let the host mark some objects as "should be stringified in `eval()`", given that `eval` currently returns all non-string inputs as-is +- It's very hard to test this in test262 given that it's about host hooks, but it has tests in WPT +- Exposing the full string in `new Function` is problematic because it's currently spec-internal, and instead host specs should rebuild it by themselves +- The general approach from the committee is that we should expose to Trusted Types whatever info it needs, and let them decide how to handle questions such as cross-realm behavior + +### Conclusion + +- This change has been approved as a Stage 3 proposal + - Without exposing the full string for `new Function` + +## Array.isTemplateObject next steps + +Presenter: Daniel Ehrenberg (DE) + +- [proposal](https://github.com/tc39/proposal-array-is-template-object) +- [slides](https://docs.google.com/presentation/d/1LTlzpboYwKxRwigATcFYEh06CIbZvOvmFdPzkNn7vJI/edit#slide=id.p) + +DE: Hello. I am going to build off of Nicolò's presentation. + +DE: Thanks for waiting, everybody. Array.isTemplateObject. This is a proposal for a simple predicate, which we stash on Array because template objects are arrays. And it just lets you know whether what is passed into it was a template created by the system. If you make an object that looks like a template, you won't trick the system. It will have an internal slot that denotes whether it was originally a template. + +DE: So this comes back to the Trusted Types discussion that NRO had. It's almost like the other side of the coin.
When we're in `eval` and have these objects that we want to pass to `eval` that are specially marked, how do you create the objects that are trusted strings? + +DE: So Trusted Types is all about avoiding injection attacks. And injection – it's about when you are running code that you don't want to be running. Kind of presumably – because you have to presume something – you trust the actual JavaScript code that you ship over. You know, of course you can imagine ways that you could do injection in the server, into the JavaScript code, but hopefully someone's looking for those. Once you have something in the JavaScript code and it's a constant, maybe you should be able to trust this. + +DE: This is what the Google team found: in their experience it was common to have constant values – HTML snippets, script URLs (?), and sometimes JavaScript. So one strategy: in your Trusted Types policy you could check for membership in a big set of expected literals. But it's easier to manage if you don't need to build a process – if the literals can be distributed through the code. And because you trust the code that you are running – which is another assumption, but we have to build on something – then it is kind of easier to manage if those literals can be distributed throughout the code. This is what Array.isTemplateObject enables. This was the blocker in the previous discussion. The initial spec text used an internal slot for the brand check. But this is cross-realm. We have many TC39 delegates interested in enabling proxy transparency. + +DE: And especially, I think, practical proxy transparency, which I will get into more later – it's, I think, a slightly more slippery concept because we're thinking about what is used in practice. + +So imagine you have a template tag that is doing this check that's going to brand your JavaScript string or HTML snippet as literal, so that later it can be accepted by the system in one of the sinks.
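The template-tag branding pattern DE describes can be approximated in userland. This is a sketch, not the proposal itself: the real proposal does the brand check in the engine (via the template's internal slot or registry), whereas here a hypothetical `html` tag and `sink` share a `WeakSet` as the brand.

```javascript
// Userland approximation of the pattern Array.isTemplateObject enables:
// a tag function brands the engine-created template object, and a sink
// later accepts only branded (i.e. literal) input.
const literals = new WeakSet();

function html(strings, ...subs) {
  literals.add(strings); // only objects that actually came through the tag
  return { strings, subs };
}

function sink({ strings }) {
  if (!literals.has(strings)) throw new TypeError("not a literal template");
  return strings.raw.join(""); // simplified: ignores substitutions
}

const ok = html`<b>hi</b>`;
console.log(sink(ok)); // <b>hi</b>

// A forged object that merely *looks* like a template is rejected:
// sink({ strings: { raw: ["<img onerror=...>"] }, subs: [] }) // throws
```

The engine-level predicate removes the need for every tag to maintain its own registry, which is exactly what makes the same-realm vs cross-realm question below matter.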
+ +DE: So what if we put that tag through a membrane? Well, it wouldn't really work, because it's not checking the brand on the receiver, but on the argument. But there are two ways through. One of them is the realm-specific way, where we're just looking within the local realm where that template object function exists, looking at its TemplateMap and seeing whether the object is a member of that. Another option was to be like `Array.isArray` and go through the kind of target of the proxy. So the TemplateMap version – Igalia tried to implement it, but had some trouble getting it to work well. And so the feature got delayed. The initial version, as things stand, may go ahead without the literal thing. That makes it harder to deploy. So it's in general a more insecure world where injection attacks are more exploitable – a serious security issue to consider. + +DE: However, I don't think we should worry about this so much, because the realm-dependent brand checks fail safe. We were previously thinking, maybe we should do the realm-dependent brand check, where you look at the TemplateMap and check for membership of that: is this object passed in a literal and from my realm? The realm-dependent check checks these two concepts. Is it a literal? For our use case here we really need to know whether it's a literal and an array in the realm, so as to conserve the consistency of whether the tag works across membranes. If we do it eagerly, the tag would not work across membranes. And with this check being done kind of more flexibly – well, okay. It will break across its – sorry. The realm-dependent check would never work across realms. And the realm-independent check would break in the cross-membrane case when put across the membrane, but not locally. + +DE: So anyway, we are failing safe because we are ensuring either way the literalness. It's all about kind of telling the user more eagerly that you can't do this thing that's cross-realm.
So if you run into an issue, you can use the template tag that was created within your realm – which is what the realm-dependent version forces you to do. With the realm-independent version, you would discover that requirement when you tried to use a membrane. I don't think it's that bad. + +DE: If we want to have a realm-dependent brand, then I think we should do it with a less confusing spec. Like, authors who tried to implement this previously, they thought, okay, I will literally look up the TemplateMap and iterate through it to see whether it has the membership, which is why the implementation was impractical. Instead, a more practical implementation strategy that Nicolò pointed out to me: well, just make the internal slot and have it point to the realm. So previously, I guess, our templates tried pretty hard to not point to the realm. Now you have to go back and do that again. + +DE: And then in Array.isTemplateObject, we compare that to the realm of the `isTemplateObject` function. So in conclusion, I think it's important that we think globally about these security matters. There are many different threat models we are juggling: defending against injection, and defending against different untrusted pieces of code affecting each other. Those are both threat models and we shouldn't kind of put one above the other. We definitely shouldn't rank the web security model too low. + +DE: So anyway, this proposal is at Stage 2, but it was blocked on this issue. I would like recommendations from the committee. We should think globally [?] about same-realm versus cross-realm brand checks. Any other concerns? I would like to propose this for Stage 2.7 at the next meeting. + +CDA: KG? + +KG: Yeah. Thanks for the presentation, DE. I support the semantics you have proposed, with the cross-realm brand checks.
For some reason, I thought that the concern about that was somewhat different, though, which is that I trust code that is on my page maybe more than I trust code that is on other pages. So I thought that the concern was a security thing. You know, if I have evaluated all of the code that is on my page, and checked all of the literals that are on my page and confirmed that they do what I want in the context of my page, that doesn't necessarily mean that I have checked literals on other pages that happen to share the process with me. + +DE: I see. + +KG: We could say we don't care about that. I think on balance, the simplicity argument wins out over that. But I wanted to make sure it's something we considered. + +DE: Okay. Thank you. Any more on the queue? + +SYG: I want to reply to Kevin's point. The realm – the JS realm notion is a finer notion than origins. "Do I trust literals on my page more than literals from other pages?" – the thing that maps onto that is origins instead of realms. Does that match your intuition, DE? + +DE: Well, other pages can only communicate by, like, `postMessage`. And so they won't – this will be protected against that, because these won't be template literals. + +SYG: Origins. + +DE: Even other workers will face that. + +SYG: Maybe the realm – you can communicate with other pages by having an iframe on the page. + +DE: Yeah. So this sense of literalness doesn't – it's not the most flexible thing, but it allows literalness to be communicated to and from iframes. + +SYG: Okay. + +KG: I agree that concerns about cross-realm literals are much less serious than concerns about cross-origin literals. I mean, those would be fatal, if you could do that cross-origin. Cross-realm, I agree, is a much smaller concern, which is why I am happy with the brand check that does work across realms. But it is still a different boundary. We sometimes see CSP policies that are different for different pages on the same origin.
+ +DE: I see. + +KG: Just because you are in the same process, and the same realm, it doesn't necessarily mean that you have the same assumptions about what you trust. + +DE: Right. Fair enough. Thanks. + +DE: What's the next point? + +MM: First of all, I want to answer both KG and SYG, which is that the discussion of the utility of this has so far been completely browser-centric. There is no "JavaScript origins"; it's not part of the language. But everything outlined here is useful for security outside the browser – all the rationale for why you want to do this, including supporting Trusted Types, which is a useful set of concepts outside the browser and outside the concept of origin. + +MM: And therefore, the notion that different realms can be trusted differently is relevant. Now, the first question I have, that will help resolve what we need to do in the spec, is: if the test was for a cross-realm internal slot like you have in the current proposal, and you also tested as part of the spec operation – which you do currently in many of the examples – that it also inherits from this realm's Array.prototype: since templates are born immutable, what they inherit from is locked in, so it seems that the conjoined test safely emulates exactly the lookup in the TemplateMap. + +MM: So the spec only has to traffic in observable differences. So if conjoining the slot with the prototype check is not observably different from the TemplateMap lookup, then we have implemented the TemplateMap lookup by bundling that into the test. + +DE: Would this be done in the built-in definition of this, or could it also be done at the user level pretty easily, as you outlined? + +MM: I am suggesting it be bundled in, so there's no operation that is not conjoined with the prototype check. And I am not suggesting that it be specced that way.
If it's specced as a TemplateMap lookup, and an implementation that does the conjoined test with the unspecified internal slot is equivalent, then that cheap implementation succeeds at meeting the spec. + +DE: We pored over this and couldn't come up with your algorithm. It failed as a communication device. + +MM: That's taken care of easily with a normative note. + +DE: What is a normative note? + +MM: Sorry. Non-normative. + +KG: We should focus on the semantics and let the editors decide how to write it down. + +MM: The observable semantics should be realm-specific. And if it's realm-specific, then having the exposed API be `Array.isTemplateObject`, testing the argument, does not violate the general design rule – which we have for good reasons – of not having operations that test internal slots on arguments other than the `this` argument. + +DE: Okay. I want to encourage the SES group to document and, you know, provide a written justification for these design patterns, because you say we… YSV presented this idea that we would document invariants and work towards consensus on them. I think that would be useful here. + +MM: Yeah. Absolutely. + +DE: Great. + +MM: I shouldn't promise when we will do any kind of exhaustive documentation of invariants, because it's just – it's a hard thing that we keep not getting to. But absolutely, this thing about arguments versus `this` has been gone over many times, and we can certainly write that down. + +DE: Mark, can you clarify the threat model that you are concerned with here? + +MM: So, first of all, there's the issue that Kevin brought up, which is that the utility of this thing really is better served by having a realm-local check, because we know that if `eval` is enabled the check doesn't mean anything. If it's a realm-local check, `eval` may be disabled in my realm even while it is enabled in your realm. I am not bringing origin into the conversation. 
And `eval` being enabled in your realm does not invalidate the utility of this as a security mechanism in my realm, even when our realms are in direct communication. That's the utility. + +DE: I am a little unsure of the prospect of defending yourself when there's injected code in the other realm. You can often construct a kind of gadget where you get the other realm to do something, even if it's supposed to be defended. + +MM: Okay. We should go over that, because I am convinced that it is useful to communicate between the realms. In that context, I will also say, we specifically created the callable boundary in the ShadowRealm proposal so that the standard way to create multiple realms in the language would have a callable boundary. With the callable boundary, if the only communication is over the callable boundary, the problem is solved with the existing proposal, because the existing proposal would fail on a proxy. In this case, since the only way to get across a callable realm boundary is with a proxy or equivalent, that would basically enforce, in that scenario, that the test is always realm-local. So therefore, for uniformity, I would still recommend that it be realm-local, even when there's not a ShadowRealm callable boundary in the way. + +DE: For uniformity, between… ? Okay. I don't quite understand the motivation from uniformity. + +MM: Because the scenarios where we want to use the realm as a trust boundary are ones that align with using ShadowRealms rather than direct realms, I won't block this API as currently specified. I am willing to ignore the direct realm-to-realm contact case as being one that's relevant for security programming. + +DE: Interesting. Okay. To summarize, it sounds like we heard a weakly held argument from KG in favor of the cross-realm version. And now, it sounds like your same-realm argument is more weakly held, given that ShadowRealm provides this defense. 
+ +MM: I am saying it's not a blocking consideration. + +DE: Okay. Moderately strongly held. + +MM: Thank you. Yes. + +DE: Great. Any further comments? + +CDA: NRO? + +NRO: Yeah. I wanted to say, if we are going with the same-realm check, then whether this is a slot or it remains as a map in the realm is an internal question. So we should do whatever is easier to integrate. That decision should not affect whether the design is good or not. + +DE: Why would host specs care one way or the other? + +NRO: Well, it's – like, if you need to check whether something is – it's easier to just check rather than having to do – I don't know. But it's an internal question. + +DE: Yeah. I think this goes back to making sure that our spec is intelligible. Implementers are one of many audiences we are communicating with through the spec. If we don't communicate with them, we are kind of sunk. + +DE: Okay. Any more comments on the queue? + +SYG: Yeah. I have a question. We discussed a lot about realm-dependent versus realm-independent. What would Trusted Types do if we specced one or the other? I imagine for realm-independent they would use it as is. For realm-dependent, would they enumerate all realms? + +DE: You are normally going to use the literal constructor from your realm and throw at the point you are applying the tag, if it's the wrong one – normal code that is like, whatever namespace object it was, dot, you know? To get HTML from a template, it will throw eagerly. It wouldn't bother with the cross-realm thing. Ultimately, it's not useful to share those template tags across realms. + +SYG: I can believe that. If the Trusted Types folks think that, given a realm-dependent version, they would use it as is, and there is this corner case where things would fail and that's not a big deal, then that's fine with me. 
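DE's description of a tag that "throws eagerly" can be illustrated in today's JavaScript. There is no reliable check yet; the sketch below uses the best-effort, forgeable heuristic that `Array.isTemplateObject` is meant to replace (the function names are illustrative, not part of any API).

```javascript
// Forgeable heuristic available today — the gap the proposal closes.
function looksLikeTemplateObject(v) {
  return Array.isArray(v) && Object.isFrozen(v) &&
         Array.isArray(v.raw) && Object.isFrozen(v.raw);
}

// A Trusted Types-style tag that throws eagerly when misapplied:
function trustedHTML(strings, ...subs) {
  if (!looksLikeTemplateObject(strings)) {
    throw new TypeError("trustedHTML must be applied to a template literal");
  }
  return String.raw({ raw: strings.raw }, ...subs);
}

console.log(trustedHTML`<b>${"hi"}</b>`); // "<b>hi</b>"

// The heuristic is forgeable, which is why a real brand check is proposed:
const raw = Object.freeze(["<i>", "</i>"]);
const forged = Object.freeze(Object.assign(["<i>", "</i>"], { raw }));
console.log(trustedHTML(forged, "sneaky")); // "<i>sneaky</i>" — accepted!
```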
+ +DE: Also, to be clear, one possible usage mode would be actually using this directly in your own tag, but then you have a separate policy thing which grants the branding capability to that template-tag variable, and it goes from there. It doesn't necessarily have to be all built in and decided in one kind of way. But I think it would be nice if there was a built-in policy that just worked well. + +SYG: Right. I don't have a strong opinion – okay, I weakly prefer realm-independent, but you shouldn't weigh my opinion on that matter heavily. If the point is to be used by Trusted Types, I don't want to advance anything without confirmation from them that what we decide is fine. + +DE: Yeah. That sounds good. + +CDA: Mark? + +MM: Yeah. I agree with all that, including not blocking whatever Trusted Types insists on doing. However, I do think the right semantics is to only trust templates from one's own realm, and that trusting other realms would be the wrong default. + +DE: Okay. Any more comments? + +CDA: Nothing in the queue. + +DE: Okay. So to dictate a conclusion: I think the committee universally understood the relevance and significance of this feature, and seems interested in working through the problems. We discussed realm dependence versus realm independence. My understanding of the motivation for realm dependence was inaccurate: the goal was really about making the level of trust even higher, because you know that something came from your realm. That said, because ShadowRealm callable boundaries won't really let any of this stuff through, it's not a hard constraint. We heard very weakly held preferences from KG and SYG towards cross-realm, just for simplicity, and a moderately held preference from MM for same-realm. Everybody agreed either semantic is okay if needed, and the main determining factor, if they do have a strong opinion, is to meet the needs of the Trusted Types proposal. 
As next steps, I will go talk to the Trusted Types folks, see if they have a strong opinion and whether this decision affects them, and then come back and propose something for Stage 2.7 next meeting. + +DE: And also, there will be a lot of attention paid to the editorial question of how this serves as a communication device for implementers, to help people not get tripped up. + +DE: Do people agree with that conclusion? + +MM: Yes. + +DE: Great. Thank you. + +### Speaker's Summary of Key Points + +- Array.isTemplateObject is a proposal which permits the detection of original template literals in source code. +- This feature is useful in the context of Trusted Types, where in some contexts (CSP) it may make sense to trust HTML or JavaScript source more if it exists in a template literal, compared to something which is constructed later at runtime. +- Some delegates weakly hold that they would prefer Realm-independence (for simplicity), and MM holds moderately strongly (but not strongly enough to block) that Realm-dependence is better. + +### Conclusion + +- TC39 continues to understand the importance of this feature and is working to resolve the remaining small issue, which is whether this test only returns true on same-realm template objects. +- DE will work with the Trusted Types folks to learn whether one or the other alternative about Realm independence would be more useful. +- Array.isTemplateObject remains at Stage 2. + +## Module sync assert for stage 2 + +Presenter: Jack Works (JWK) + +- [proposal](https://github.com/tc39/proposal-module-sync-assert) +- (no slides) + +JWK: Let me recap this proposal. The problem to solve is that some code must be synchronously loaded, and if the graph accidentally becomes async – by a new top-level await appearing in the subgraph that was not there before – the code might break. For example, service workers. 
Event listeners must be registered during the first run, and if the developer uses a native module it's fine, because top-level await is disallowed in service workers. But if they are using Webpack to bundle the service worker, they might not find this problem, because then the module is not an ES module and the main code might be deferred. + +JWK: The second part is polyfills. A polyfill must be run synchronously, otherwise the application might be broken. The third part is for library authors: they may want to declare their library as synchronous, so they won't accidentally introduce a breaking change. + +JWK: And the solution is simple: introduce a new directive in the module source. If this directive appears and the subgraph of this module includes TLA, it is a linking error. + +JWK: So that's it. And there isn't much design space for it. Do we have a queue? Yes. + +CDA: NRO? + +NRO: Yes. So you mentioned that service workers already disallow top-level await. This would only be useful for… compiled code? I think there is still value in having a way within the language to explain what service workers do. The way service workers currently disallow top-level await is entirely outside the language – the language doesn't give a way to say, at some point in the module graph, "this is actually async, so let's not evaluate it". So I think it's good for the language to have a way to describe what the host is already doing. + +JWK: Thank you. + +CDA: DR? + +DR: So one of the things that came up when we were trying to explore this on our team and understand the value: as a library author, you may decide to say something like, "I want to use a sync assertion", right? + +JWK: Yes. + +DR: And one of your dependencies may violate that. 
But a downstream consumer – someone depending on the intermediate dependency – may not actually care about that synchronous assertion. They may not have an issue with asynchrony anywhere downstream. That is one of the reasons I am a little bit against the idea of this proposal: someone misusing this can actually cause problems for someone depending on some library, where they actually don't care about synchronicity versus asynchronicity. I think there's a real problem with this kind of thing affecting other dependent code. + +JWK: Maybe we can remove this example, so it is no longer a main motivation. But the proposal still has value, I think. + +DR: NRO has a response. + +NRO: I disagree with you on this, DR. If the library is using the assertion, this will not impact the consumers that don't care, just because the library is not using a dependency with top-level await. So the consumers don't actually notice; they just happen to run code that is written in a synchronous way. + +JWK: It will break the downstream users, because this directive can be applied in any module. So if your module is the intermediate one, and your upstream has TLA, this will break your downstream users. + +DR: Right. Yeah. And NRO, I don't know if this is what you meant, but I am not talking about two independent dependencies, like packages A and B that have nothing to do with each other. It's that package A depends on package B, and B is asynchronous. The library was trying to enforce this characteristic, and downstream consumers may not care. My general stance is that for something like this, the end developer is the one who has to make the call. Which ties into the other point that I am going to make: you will get an error one way or the other. + +So anyway, we can get back to that. + +NRO: SYG can go first. + +SYG: Okay. I think we can get back there right now. 
Is the worry, DR, that if a library only works if it is sync, for whatever reason, and they put this assert, the upstream or the downstream consumers have no choice but to care, because the library breaks if anything becomes asynchronous, including becoming asynchronous via a dependency? + +SYG: For that case, it does only move the timing of the error up, and I guess the argument is that it's better DX to fail earlier. But is the worry that a library says "I want to be synchronous"? Is that the actual worry? + +DR: I think people will misuse it in some cases. Or, in the sense, there's sort of this rift between dev time and ship time and consumption time, where you want this as a dev-time tool to say, yes, none of the third-party dependencies are actually asynchronous. And then when you ship, you probably don't want this assertion at all. You probably want this as purely a dev-time thing, which is really something that you could catch with tests. + +JWK: Library authors can write this assertion in their tests? + +DR: If this exists, they should probably write it in a test rather than in their code. But that said, they can just write tests: do a synchronous import, and if that fails, you know that your library is not fit to be synchronously imported. + +JWK: Yes. I think that's a teaching problem. + +DR: NRO has a reply. + +NRO: How do you write a test for this? + +DR: I assume you would have to have some tooling that runs through the graph and asserts there is no top-level await – the builder has to run the tests. If there was an `import.sync` or something the runtime provided, you might be able to do that. + +DR: Something along those lines. Or, for example, in Node.js, it's now supported that you can `require()` a module. So you could require the module itself, and if that fails, then it will be because there's a top-level await. + +DE: Thank you. Yeah. I support this. 
I think that every reasonable use case I have seen for top-level await applies to entry points, and for anything that is not an entry point, I don't understand why you would use it. Top-level await is a breaking change with or without this proposal, so as Jack has said, I would want to ensure that my API doesn't change from underneath me without my tests failing. + +CDA: SYG? + +JWK: Yes. It's only for the module body. There is no meaning for ScriptBody to do anything with this assert. + +SYG: Sounds good. Thanks. + +CDA: Daniel? + +DR: Because it ties back to the point that a consumer will get an error one way or the other – whether from the runtime, or from trying to use some synchronous importing mechanism, or because, as SYG said earlier, some runtime capabilities won't be available when you first do it, such as in a service worker. So it's really just more explicit UX, I think. Which does make me – you know, I feel very soft on it; I am not sold on the utility. And then there is the hazard. So I think those are two things to keep in mind. + +CDA: NRO? + +NRO: So top-level await is considered to be a major change. Given the discussions we were having around dependencies randomly adding top-level await, I am not sure my position reflects that. So does anybody have any thoughts or experience on what the ecosystem actually does? + +JWK: Many delegates believe that adding TLA is a breaking change. For example, in those cases, if a TLA comes in, the code will break. + +NRO: Okay. Thank you. Then cases of a dependency starting to use top-level await without the dependents knowing are a bit – they are not going to happen in practice. But yeah, I also find the point that this can be done at test time to be somewhat compelling. + +JWK: Actually, I made this proposal because I have been hit by bugs caused by TLA. Initially, my example was for Chrome extensions. 
`onInstall` event listeners must be registered before the first turn of the event loop ends, and if you add them later, the event won't fire. After a TLA gets introduced, because I have already installed the extension, I don't notice that my `onInstall` handler is broken. + +NRO: There are cases where TLA should have been allowed, but it's not – non-service-worker cases. + +JWK: Yes. There are non-service-worker use cases. + +CDA: SYG? + +SYG: MF brought this up in Matrix and said I could bring it to the queue. Given what was said earlier about this feeling more like a dev-time aid than a ship-time thing – to the extent that you probably shouldn't ship this at all – MF brought up the point in Matrix: is this something that the tools can just do? If it's to aid in writing tests and catching bugs earlier, they could conceivably just do this check already, without there being a standardized directive. + +JWK: You mean a non-standardized directive? + +SYG: Yeah. + +JWK: Then, for example, how should we implement it in Node.js? Do we need to touch V8 to implement this? + +SYG: No. By tooling, I think usually we mean the ahead-of-time tools that do transpilation, packing, bundling, that kind of thing. + +JWK: Okay. Here I wrote: a possible solution is to let the community develop their own linter- and bundler-level bans, but each tool needs its own convention to do this. Adding it to the language can be a portable way, also for environments that do not have a bundling step. + +JWK: For example, let's say we're writing native ES modules for browsers. You have some dependencies from a CDN that usually cannot be linted. If CDN-sourced modules add TLA, a linter may not catch it. + +SYG: Yes. I'm sorry, I am not asking whether doing it in ahead-of-time tools covers 100% of the use cases, but whether the main value-add is still there if you do it via tools. 
If folks believe that this is a dev-time thing rather than a ship-time thing: yes, it's true that if you are in a mode where you are writing things and then loading them directly in the browser, and you're loading modules from elsewhere, and you also want your dev-time tools to be available at that time, then no, the proposal of doing this via tools would not cover that case. But it is my impression that people do not develop that way. + +CDA: Okay. KM? + +KM: I guess on that same point: if someone adding a top-level await to something you're loading from a CDN breaks your site, wouldn't that break your site for your customers independently of this feature? So the feature makes you, I guess, always break? Which maybe is good? + +KM: It might be fine to add a top-level await. But maybe not. And now they just sort of all break, which feels almost worse. + +JWK: Hmmm, yes, so my counterexample doesn't sound so impressive. And to Daniel's question: no, there is no spec text at the current time – oh, is that a requirement for Stage 2? Sorry, I forgot that. + +CDA: Yes. Initial spec text is a requirement for Stage 2. Placeholders and TODOs are allowed, but initial spec text is required. + +JWK: I want to ask: if I write spec text, can I get Stage 2 today? Because if people think it won't advance, I don't want to spend time on writing spec text. + +CDA: NRO? + +NRO: I like this proposal, but given the opinions expressed, it's hard today. Even if nobody would explicitly block, this is not ready for Stage 2. + +CDA: MF? + +MF: The combination of "you will get an error anyway" and "this can be done in tooling for better DX" is a compelling argument against doing this proposal. I wouldn't want to discourage you from pursuing this, and you may not need to change the proposal, but you should make a better argument in favor of it. + +JWK: I think the current one is the best I can think of. 
If this is not enough, maybe I should just try to open a PR on Webpack or other tools. + +CDA: Dan? + +DE: Well, I am kind of on the fence about the proposal, mostly for the first of the reasons that Daniel Rosenwasser gave, about how it becomes a composability issue. It really reinforces that inserting TLA is a major change for everybody who depends on you, and I am not sure that has to be the case, though I understand that a lot of people believe it is. + +DE: What I wanted to disagree with is the view that if something is kind of a tooling concern, it's out of scope for what we're doing here in committee. I think a huge amount of what TC39 has accomplished over the last ten years has been to provide standards that unify the tooling ecosystem. And I think it's been very helpful that we have browsers and tooling aligned, going towards the same API. + +DE: So I would like to continue the conversation that SYG started about how we want this relationship to work. But I wouldn't want to establish a pattern of "oh, that's a tooling concern; we're going to consider it not a TC39 thing", because I think that would be harmful to our design process. + +CDA: SYG? + +SYG: Yeah. To clarify, the conclusion I don't want people to draw is that something that can be done in tools shouldn't be done as part of the language or as part of TC39. The conversation I started earlier is about things that ought to have native execution support. That is an orthogonal question to the value of standardizing something so that there is central coordination among tools and browsers. + +JWK: Okay. So I will write spec text and come back next time. 
+ +### Speaker's Summary of Key Points + +- JWK sought to advance the proposal. + +### Conclusion + +- Engines don't believe it is useful enough to implement in the browser; it looks like it may be better served by tooling. +- There is no spec text, so the proposal does not qualify for Stage 2; it may come back after spec text exists. + +## bringing back Error.isError, for stage 1 or 2 (or even 2.7) + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/ljharb/proposal-is-error) +- (no slides) + +JHD: All right. So I originally presented this isError proposal something like eight or nine years ago. Essentially, before ES6/ES2015, I raised the issue that there's a lot of code on the web that depended at the time – and still depends – on `Object.prototype.toString` being robust: it gives you the internal [[Class]] slot of the object, as it was called at the time. The addition of `Symbol.toStringTag` breaks that. The committee was smaller then, and that group of folks' reaction to brand checking was that it was "icky"; that nobody should do that; we don't like it. + +JHD: As a result, the creative alternative suggested was that, well, all the builtins have prototype methods or static methods that do brand checking in various ways - you can use one of those and get your brand check. + +JHD: And so I did. I made a ton of predicates that I published that do the brand check, so I don't have to reimplement that logic in a bunch of places – and that added lots of weight to the internet, but that was the path to getting that check. + +JHD: We have continued, for all new things added, to include a way to do brand checking in that spirit. However, Error was overlooked. It turns out that it was the only builtin that didn't have a way to brand-check its instances. It has a slot, but nothing checks it. + +JHD: And so I made this `Error.isError` proposal. It's very simple - just like `Array.isArray`, as you'd think, and it checks its argument. 
Just like `Array.isArray`, it pierces Proxies - I am not attached to that and would prefer not to do it, but I was brand new to the committee and I was just swapping Array with Error. + +JHD: So the compromise I went with, when this proposal was effectively rejected, was to pursue error stacks instead. That provides me the mechanism while providing a lot of value in standardizing `.stack`. + +JHD: However, that proposal stalled because I had focused on standardizing the structure and schema, but not the contents themselves, and I got unexpected feedback that the proposal would not advance without also specifying the contents, and I have not had the time to boil the ocean and do that. + +JHD: And it was recently suggested by a number of folks, in the TG3 calls, that perhaps I should bring back this proposal, because the committee may be less allergic to brand checking than nine years ago. What didn't exist at the time was `structuredClone` - something that exists in the web and Node, and we're hoping to eventually pull the spec into TC39 - which serializes error types; I don't believe the actual check uses brand checks. But I need to know if I have one of the real error types if I want to get the right behavior, and I can't do that in a cross-realm way. The additional motivation still exists, which includes debugging, and also RunKit, which is on every npm package page: you can click a link and try out the package in a REPL. That serializes across realms, and they, at the time and still, want to know for sure whether a thing is a built-in error or not, so they can serialize things in the right way and replicate the error object on the other side. 
+ +JHD: So I wanted to get the committee's thoughts on this. Given that I already have initial spec text (and that the Proxy-piercing would likely be the controversial part; I will just defer to whatever the committee wants on that), my hope is that I could get Stage 2 and bring this proposal back and move it forward. + +CDA: NRO? + +NRO: So, is there already a way to do this brand check? Can you clone an object with `structuredClone`, and once it's cloned, check whether you have an error from the current realm by looking at its prototype? + +JHD: I haven't explored it deeply enough to answer that question right now. My suspicion is that it doesn't do that, and that on the incoming side, if it takes something that seems like an error, it treats it like one. But on the receiving side – and this includes transferring to a worker and things like that – I believe the clone will be a real error object, similar to the one on the other side, whether the input was real or not. + +KG: It does actually – `structuredClone` is a weird algorithm, but it checks the slot. `structuredClone` doesn't generally preserve prototypes. It preserves a specific list of brands that it knows about. It checks for each of the brands in turn, and if a brand is present, then it serializes as "this is an Error" or "a Map" or whatever. And then on the other side it constructs that realm's Map. Otherwise, there's no preservation of prototypes – only of these internal slots. + +JHD: Thank you. + +NRO: So it gives you an object with the Error prototype, because the input has the internal slot? + +KG: I believe that is true. + +NRO: I am not against this proposal. 
But the only reason for this proposal is to have a way to check if something is an error, and it doesn't matter how easy or difficult it is – if we don't care about ergonomics, we might already have similar capability. Given there were discussions on standardizing the clone – I don't know what the status of that is – we might consider just using that. + +JHD: So if `structuredClone` had no side effects, and if it were in the spec and normatively required, then I would agree that it would be an unergonomic and annoying, but sufficient, alternative. But it does have side effects. You can't just throw something into it. I believe it transfers buffers and things like that, and it reads properties. And even if we move the spec into TC39, I believe the intention is to have it be normative-optional, but I am not 100% sure. + +NRO: Okay. I retract my objection. + +CDA: MM? + +MM: First of all, I support this at Stage 1, because Stage 1 is sufficiently amorphous; I think you should solve the problem. I don't support Stage 2 in the current form because of the specifics of the API: it violates the rule against checking an internal slot on a non-`this` argument. + +MM: The stack accessor, from the error stacks proposal – which, by the way, I do want to see advance, and I am interested in finding ways to do so without boiling the ocean – the important thing about it is that the `stack` accessor property is inherited from `Error.prototype`. If this were an `isError` that was also inherited from `Error.prototype`, and tested its `this` rather than its argument, obviously you could still turn that into a reliable brand check if you wanted, by getting the inherited thing and applying it directly. 
+ +MM: But the thing about doing it as an inherited method is the normal use case: unlike isTemplateObject, the relevant question is not "is this one of my errors?" but "is this an error?". So if you have a proxy for an error from another realm – whether it's a direct-communication realm or a ShadowRealm – `.stack` works, because the fetching of the property also goes through the membrane. Likewise, if it's inherited, an `.isError()` call would work, because the lookup again goes through the membrane. + +CDA: Did we lose – Mark, I think you cut out. + +MM: No. I am done. + +JHD: Yeah. I think it's fine if it's a prototype method. It could even be an accessor if we want, although I would prefer a regular function. But mirroring `Array.isArray`, there is precedent for a static method that starts with `is` and takes an argument, which it brand-checks. + +KG: I don't see how this could work as a prototype method, because the point is that someone has handed you an object and you want to check if it's an error. Calling the isError property on that object is not a way to do that. + +MM: So let's think about how this would have been built on stack. If we had stack in the language, the normal use of the stack is: if it seems like an error, you look up the stack, and if you get a stack, you're happy. + +MM: We have already made a policy decision, which we discussed especially in the context of the intrinsics, that initial code can replace things anyway; there's no security against initial code. So the question is: what is the purpose of asking the question? If the question is being asked casually – does this claim to be an error? – well, `instanceof` doesn't work cross-realm. If you're asking casually in a multi-realm situation, the inherited method is the right way to ask it. 
If you are trying to ask the question in such a way that you're secure against being spoofed, then: is a membrane between you and another realm a spoofing attack, such that it should fail the test? + +MM: In which case we have a question of where we draw the line on membrane transparency. Where we have drawn the line - and KG respected it with regard to set methods - is that you check internal slots on `this`, and you use only the behavior of arguments. + +KG: Right. But that only works because you expect that the way people are using the method is by calling the method on the object. If this was always called with `Error.prototype.isError.call`, that doesn't help. + +MM: It depends on what your threat model is, what the purpose is for which you are asking the question. In the normal threat model that Google works with, that Chrome works with - you know, Chrome generally makes arguments assuming that there's no mutual suspicion between code within a realm. And they often assume no evaluation in realms when you're doing security checks - + +KG: You have misunderstood. I am not talking about a threat model. I am talking about practical membrane transparency. And what I have understood practical membrane transparency to mean is: if someone is using an object in the normal way, then they will not notice if you put a proxy there. And what that means is that the language can't look up slots on arguments, because you are not in a position to put a membrane there. But the proposal adds `Error.prototype.isError`, and probably no one ever does `randomObject.isError()`, because you can only call that method in a situation where you already know the thing you have is probably an error. + +MM: I'm sorry - remember that we have `?.` in the language. There might not even be an `isError` method; if you don't know the object, you solve that by writing `obj?.isError?.()`.
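The two API shapes being debated can be sketched as follows. Neither form exists in the language yet, so the brand check is simulated here with an `Object.prototype.toString` test, which is spoofable via `Symbol.toStringTag` and is used purely for illustration:

```javascript
// An object with no prototype at all, definitely not an error:
const obj = Object.create(null);

// Static form (like Array.isArray): the caller supplies the value explicitly,
// so it works on any value, including ones with no prototype.
function isErrorStatic(value) {
  // Stand-in for an internal-slot check; real engines would not rely on
  // Object.prototype.toString, which user code can spoof.
  return Object.prototype.toString.call(value) === '[object Error]';
}

// Prototype form: the lookup goes through the object being tested, so for an
// unknown object you need optional chaining, as MM notes.
const viaPrototype = obj?.isError?.() ?? false;

console.log(isErrorStatic(new Error('boom'))); // true
console.log(isErrorStatic(obj));               // false
console.log(viaPrototype);                     // false: method absent, ?. short-circuits
```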
+ +CDA: I want to note we have limited time left and quite a long queue. + +MM: I will just yield, saying: I am happy with this at Stage 1, but we need to settle this issue before Stage 2. + +JHD: That's fine. The policy about not checking slots on arguments doesn't apply to static methods, and I called that out during set methods this plenary. + +MM: Well, that certainly does not satisfy my reasons for the policy. + +JHD: Yeah. That's fine. We can discuss that before going for Stage 2. + +CDA: NRO? + +NRO: Mark already covered my first point, about casually asking. The second one is, if you're worried about spoofing, you cannot call a method on a random object because it might not exist… + +CDA: MF? + +MF: I will try to make this fast since we're running out of time. First, I would like to reaffirm my position from earlier that I do think brand checking is icky and you shouldn't do it. With that being said, I don't think it should be impossible to do. I just think that we should try to avoid encouraging it as much as possible, which means I support Stage 1 for this proposal, but I do want to explore other ways that we can avoid having something as appealing as `isError` that people might want to reach for. I do hope that we move forward with the error stacks proposals in the future, and if that could be used as an alternative way to solve this problem, we could solve two problems at once; that would be great. I'll end it there because we're low on time. + +CDA: DE? + +DE: Could we go over use cases again? Because that could help us understand this threat-model question that MM asked. + +JHD: My use cases are debugging. I want to know exactly what I have so I can figure out what the failure is. I have wasted many hours over my career discerning between things passed to an API that are the real thing I expect versus things that merely look like it. + +JHD: So that's a big thing of mine. And then I mentioned the RunKit use case.
They are serializing errors across realms for their sandbox in the browser, and they are trying to display accurate information to the user so they can reconstruct a real error on the other side. + +DE: Function overloading - say in React or something, you have a function that does different things based on the arguments. But for all three of these, how much do they really, really need to have integrity? + +JHD: I mean, it's a fair question: if something can pretend, in all the observable ways, to be a thing, why not treat it as that thing? The platforms treat them differently in terms of stack traces. If you throw a real error, it has different call-site display behavior in Node than if you throw something that is not an error. + +JHD: If you throw an object that is pretending to be an error, it won't look as helpful and point to the line of code the way it will if you throw an actual error. There are ways to observe the difference, even if they are only practical for a human. For the human, it does need to have integrity, to get the right answer and figure out what needs to be fixed. + +DE: Thanks. That's helpful. + +CDA: SYG? + +SYG: But that sounds like you care about a different thing than whether the thing is a real error - whether you can get stack traces? + +JHD: I want to know what it really is. Errors are just the one thing built into the language that I can't answer that question for. + +SYG: Why do you care about that? + +JHD: Because the better I understand the state of my program and the provenance of runtime values, and what mistakes programmers - me or anyone else who has touched the stack - could have made - + +SYG: I'm trying to chase this chain of whys until it bottoms out at something - where you would have taken one path in your code versus another path.
Chase it to the end, to where you would use the real-error-ness to decide what to do. + +JHD: If I know it's a real RangeError [..] but if I don't know what it is, looking at a stack trace as I have just described, it could be something else. There are libraries that produce things that aren't real errors but look like real errors. It happens in practice. There are Chrome extensions that people run in their browsers that do that. The output ends up in the logs, and being able to check stuff helps. + +SYG: I don't understand how the log thing would help. If it's a predicate that you have to evaluate, you would have to have a runtime handle on this error thing to check whether it is a real error. If it's a stack trace, any user error can be made to print just like a RangeError. + +JHD: All of the JavaScript runtime tooling that intercepts values for logging has the ability to introspect the value. It can check whether it's a real error. Node has `util.inspect` - they use C++ to get this, because it's not possible in JavaScript. + +CDA: EAO? + +EAO: The concerns about potentially overlapping implementation with custom matchers should proceed on the channel. I am not worried about the specific details; I don't think Stage 2 is appropriate yet, but Stage 1 would be fine. + +CDA: SYG? + +SYG: This is a separate thing. I am somewhat worried about my next two items. I am somewhat worried about the ease-of-misuse possibilities on the web. For those not familiar, DOMException is currently not a real error, in the sense that it is not created with all the internal slots of an ECMA-262 error. Instead, the HTML spec says that DOMException, and other things subclassing from DOMException, must have 262's `Error.prototype` on their prototype chain, and that if there are additional facilities granted to errors, like stack traces, DOMExceptions should get them as well.
So the way this is hooked up currently is via prototype chains. From the user perspective, this seems fine. By adding a brand check, I am worried about how easy it is to misuse: if you are a JavaScript programmer on the web and you care whether something is a real error, should that return true only for 262 errors, or also for DOMExceptions? Depending on what you want that answer to be, that may have implications: does it mean that DOMExceptions must be representationally literal subclasses of errors, or is it done by host hooks? That has implications that I want to think through as well. I think we can't just focus on "is it a real error?" in the 262 sense. + +JHD: So I think the rubric I would use to answer that question for myself would be: for all the places where errors have special behavior, do DOMExceptions have any of that special behavior, whether that is structuredClone or sticking stuff into the DOM or whatever? If they do have similar special behavior, I might expect the predicate to return true. If they don't - if I can just make my own object that successfully pretends to be a DOMException - then it doesn't matter. But I don't think that is a requirement. If those other things I mentioned… + +SYG: In the sense that the mental model users have is "I want to care about exceptions thrown by host APIs - or platform APIs, whether that is 262 or HTML" - if we ship a predicate that only covers what 262 provides, then that adds a misuse hazard. I also want to call out that the way in which structuredClone cares about errors is immaterial: it cares because it must create the right thing on the other side. It doesn't care about errors for any behavior other than recreating the thing it was given. So I think it is not that important that structuredClone itself has brand checks on errors.
+ +JHD: Okay. I don't have a strong opinion about whether structuredClone should be brand-checking or not. Kevin says it is. But that's fine either way. Yeah. + +CDA: There is a clarifying question from Mark. I will note we have less than a minute left in today's meeting. + +MM: All right. Yeah. Probably for Shu: do DOMExceptions have stacks? + +SYG: I haven't personally checked in the dev tools console. The spec says they ought to; there's a line that says if an implementation gives errors additional special powers, you should expose those. + +JHD: That special power, I think, falls under the same rubric. It sounds like I am going to ask for consensus for Stage 1, and I will unarchive and transfer the repo and have further discussions with SYG and MM and whoever else has expressed concerns before I come back for Stage 2. Is there consensus for Stage 1? + +WH: I support Stage 1. + +JHD: Thank you, WH. And MM is on the queue as well. Anyone else? Any objections? Okay. Thank you. + +CDA: All right. We are ending right on time, more or less. We will see everyone tomorrow. Thank you. + +### Speaker's Summary of Key Points + +Strong support for Stage 1; before Stage 2, the champion needs to resolve concerns, particularly about DOMException categorization (SYG) and internal slot access on arguments (MM). + +### Conclusion + +Proposal has Stage 1. diff --git a/meetings/2024-04/april-11.md b/meetings/2024-04/april-11.md new file mode 100644 index 00000000..35f1c63d --- /dev/null +++ b/meetings/2024-04/april-11.md @@ -0,0 +1,1072 @@ +# 11th April 2024 101st TC39 Meeting + +----- + +Delegates: re-use your existing abbreviations! If you're a new delegate and don't already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream.
+ +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|---------------------| +| Jesse Alama | JMN | Igalia | +| Daniel Minor | DLM | Mozilla | +| Waldemar Horwat | WH | Invited Expert | +| Ashley Claymore | ACE | Bloomberg | +| Nicolò Ribaudo | NRO | Igalia | +| Chris de Almeida | CDA | IBM | +| Duncan MacGregor | DMM | ServiceNow | +| Bradford Smith | BSH | Google | +| Jordan Harband | JHD | HeroDevs | +| Jirka Maršík | JMK | Oracle | +| ZiJian | ZJL | Alibaba | +| Keith Miller | KM | Apple | +| Linus Groh | LGH | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Samina Husain | SHN | Ecma | +| Eemeli Aro | EAO | Mozilla | +| Ron Buckton | RBN | Microsoft | +| Daniel Rosenwasser | DRR | Microsoft | +| Aki Rose Braun | AKI | Ecma/Invited Expert | +| Mathieu Hofman | MAH | Agoric | +| Mark Miller | MM | Agoric | +| Dominic Gannaway | DGY | Vercel | +| Mikhail Barash | MBH | Univ. of Bergen | +| Istvan Sebestyen | IS | Ecma | + +## Decimal for stage 2 + +Presenter: Jesse Alama (JMN), Liu ZiJian (LIU) + +- [proposal](https://github.com/tc39/proposal-decimal) +- [slides](https://docs.google.com/presentation/d/1kXurIVl4kjzclwFgfzJqohtPyryeKI7Mn9C_w7aAYp8/) + +JMN: Great. Thank you. My name is Jesse. I'm with Igalia. I'm presenting an update today about decimal. I'm working on this with Bloomberg, and I've got a co-presenter on the call today from Alibaba; you'll see ZiJian in just a moment. So just as a reminder: what's the idea of the decimal proposal? The idea is to add some kind of exact base-10 representation of numbers to JavaScript. We've had some nice discussions - yesterday, or, sorry, I guess the day before; maybe it was Monday - about exactness and adding things and whatnot. And I'd like to re-up that topic here with decimal. This is a somewhat more ambitious project than what we were talking about before with the `Math.sumPrecise` stuff.
The goals for today are to present a bit of the ecosystem. There was some discussion last time I presented about products out there that are using decimals, so one of the things I would like to do today is a bit of a deep dive on some larger products that are using decimal numbers for their day-to-day work. Then I'll talk a little bit about the state of decimal usage in the open source world: the way that people use libraries, and ways that people might use decimals but currently don't. Then I'll try to give you a breakdown of the state of the proposal as it stands today. I'll propose a solution to a topic that we have talked about a number of times here, the normalization issue. And if things look good, then we'll ask for consensus to advance to Stage 2. + +JMN: So, products that need decimal numbers. Usually when we talk about decimal numbers, we're talking about human-consumable quantities. Usually we're thinking about money, although that's not the only use case. Bloomberg is quite active in the TC39 world; you've seen that. They're using decimal in their ecosystem. Ashley, are you there? Would you like to talk about this? I'm also happy to give a bit of a discussion here, if you'd like. + +ACE: I can say briefly, yeah, thanks, Jesse. So, yes, at Bloomberg, people that use Bloomberg for any of their financial data would be very pleased to know that many of our systems here do use decimal, to make sure we correctly capture the value of things in their financial world. And that's because a lot of code at Bloomberg is using C++ and Python, and a mixture of different databases, and all of these, in the way we're using them, have a built-in decimal type.
And we can communicate between these things, and we have schemas - you can think of these schemas a little bit like a protobuf schema, but our own flavor of that - where we can say which fields are decimals, so we can create wrappers between all these languages and these systems. We also have a lot of JavaScript at Bloomberg, which is why we're so involved in TC39, and the issue is that JavaScript doesn't have a decimal type. This means that JavaScript becomes a kind of weak cog in this system when it wants to communicate with these other things, or wants to be the glue between these other services, because it has a much harder time representing these values in a way that's ergonomic, especially as these decimal values can be deeply nested in the messages being passed around. That sometimes means we have to do deep transformations of the data to find the decimals, to process them in a way we can flow through JavaScript and then process them back out. And so this adds noise to the code, makes the code harder to follow, and naturally adds a performance overhead. As an example of where decimals might appear: you might think, why do you need more precision than, like, one cent? Maybe surprisingly to people that don't work in finance, it can be common to talk about things in very, very precise quantities. So when putting buy and sell bids in, a request might set a limit, and we have use cases where people want to set limits to, like, a thousandth of a dollar. And they may even want to go finer. They don't want the precision to be locked into the API, and decimal captures that perfectly: it says in the schema that you can provide a decimal value, so you're free to go as precise as you want, rather than us fixing the smallest unit or just falling back to a string and trying to document what that string must be. Yeah, feel free to add any more, Jesse.
+ +JMN: Yeah, thanks, that also sounds really good. And maybe one thing I might add is this case about very high precision: I've also seen cases of six digits of precision for these things, and I'm sure you also see that at Bloomberg, or even more. Thanks very much, ACE. + +JMN: We have a representative from Alibaba on this call. LIU, would you like to present what you're up to with decimals? + +LIU: Yes. Thanks, Jesse. Hi, everyone. I'm ZiJian, a delegate from Alibaba. Here I'm going to show our use case for decimal. At Alibaba we have a product called DingTalk, with 20 million monthly users. + +LIU: In DingTalk, we have DingTalk Sheets, with 8 million monthly active users. DingTalk Sheets is compatible with Microsoft Excel and Google Sheets, just like the picture shown on the right. And in DingTalk Sheets, we have many decimal use cases. The first is general mathematical calculations. The second is spreadsheet formula calculation, which is the most important feature of DingTalk Sheets. Because DingTalk Sheets is a web application, we face many problems with number calculations. The first is that when we do a simple calculation like `0.1 + 0.2` we expect to get `0.3`, but we get the wrong answer; and if we use formulas like IF() or FLOOR(), we get wrong answers too. So currently, to solve this, we follow the same approach that Excel does: just preserve 15 digits of precision, using the JavaScript `toPrecision` API. But even this still has many problems. We are facing developer mental overhead from the number-to-string and string-to-number transforms, just like the code shown on the right: even a simple calculation becomes complex and hard to understand in real code, especially with nested sheet formulas. And with 15 digits of precision, it's still wrong in the given case; we cannot always get correct answers.
And using a decimal library would impact performance with many formula calculations. So, if decimal makes it into the standard, we get many benefits. The first is that we can reduce mental overhead for developers - no more conversion between string and number - plus faster page rendering for server-side and client-side rendering, and a smaller JavaScript bundle, because we would no longer need to load a decimal library. + +JMN: Okay. Thank you very much. So now we have heard from a couple of larger projects out there that are using decimal. You can see the motivation is quite strong: any kind of naive treatment of decimal numbers can lead to user-visible errors. So decimal libraries are needed, and this decision was surely made by a number of other products as well; these are just a couple of examples. + +JMN: Let's take a look at the open source world. There are a few packages out there for doing exact decimal numbers. What is interesting is that they are all dominated by a single person: ['MikeMcl'](https://github.com/MikeMcl) has a bunch of packages out there. `decimal.js`. `decimal.js-light`. `big.js`. `bignumber.js`. These are a few different variants of exact decimal numbers. For example, `decimal.js` has basically all of Math sitting there, and it has arbitrary precision - we also call these "BigDecimal" semantics. It is interesting that some parameters can be globally modified, like the precision and the rounding we want to use, so calculations refer back to this global state. There is also a light version, `decimal.js-light`, which drops any representation of NaN, Infinity, or -0. It is also decimal-only, so there's no support for other bases there.
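The floating-point behavior LIU describes, and the Excel-style 15-digit workaround, can be reproduced directly:

```javascript
// Binary doubles cannot represent 0.1 or 0.2 exactly, so the sum is off:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// The toPrecision(15) workaround mentioned in the talk: round the result to
// 15 significant digits and convert back. It fixes this case, but as LIU
// notes it is not correct in general.
const fixed = Number((0.1 + 0.2).toPrecision(15));
console.log(fixed === 0.3); // true here, but only papers over the problem
```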
['MikeMcl'](https://github.com/MikeMcl) is really the hero here, you can see, you know, more than 10k packages that are using some form of decimal numbers there. + +JMN: Thereā€™s a number of users of these libraries. I tried to find some that stick out, obviously, when you do some kind of analysis of these packages youā€™re going to discover some kind of false-positives or the things that are one-off, throw away projects. So I tried to filter out many of these things. You can see, for instance, database adapters. This is one use case that We talked about a couple of times. So the idea is if youā€™re connecting to a database and that already supports decimal numbers natively, then we want to somehow respect that and big number JS is used by ā€“ and DB adapters to try to be faithful to the data in the database. Thereā€™s some stuff from Apple that is out there. SAP is using that, DataDog is using that. I see that the big query JS client is using big JS. And tensor is using that, there are graphing and charting out there, bar charts. Decimal JS is used by a bunch of things. ORM called Prisma, that is using that, using decimals behind-the-scenes. Keystone, there is a visualization library out there. This is a sample, just the tip of the iceberg of who is using all these libraries. + +JMN: What we find from all of this data is that the demand for decimals is widespread. So surely, we know, we can convince ourselves by intuition and our own experience, surely there are a lot of JS developers who just donā€™t know about decimal and kind of rolling with decimal calculations and either they donā€™t encounter any issues or they are fine with the issues. Maybe they just donā€™t know about it. But there is always a class of developers that do know about decimal numbers and are using libraries to get the job done there. Is something like 10k packages. As I said. + +JMN: We see this is used in lots of libraries and frameworks that need to integrate with lots of other code. 
So some examples I found are not particularly deep, but they show there is a lot of connection between systems, so decimal needs to stand as a faithful representative of values passing between two different systems. There is some divergence between the implementations, as you saw - the ones I mentioned here are all by a single person, and they are different, but there's not that much difference. So there seems to be a core need for some kind of representation of decimals, but also a class of details that don't seem to be all that important. + +JMN: Some of these libraries support some kind of customization, as I mentioned earlier: there's support for global state like the precision or some kind of rounding mode. But in our experience, what we have seen so far in the data is that this customization is not used that much. In typical use cases the developer will either roll with the defaults and not even touch the global state, or set it once and that's it for the rest of the application; there is no need to tweak any of those values later on. + +JMN: One of the things that I also wanted to point out is a typical case I have encountered where developers know about decimals and are using a decimal library - either one of the ones I talked about before, or maybe they rolled their own of some sort - but because decimal is not a built-in data type, at least not natively supported, they still might be making the kinds of mistakes which motivate the need for decimal to begin with. So for instance, here is something from the `event-espresso-core` library. This is some kind of function for dealing with money. By the way, these things are links; you can click them to go to the code if you want to take a look. I highlighted in red the cases where the programmer is trying to create some kind of decimal, but the input there is not necessarily trustworthy.
So here, for instance, we see `const share = new Decimal`, and the input to that - the argument to the constructor - is the result of some kind of `Math.floor`; there is a product there; two things get converted to a Number, I should say. And so the issue is that we enter the exact-decimal world - that's the intention of the programmer - but with some kind of calculated, possibly inexact value, probably not the value that the programmer expects. + +JMN: Here is an example from a bar-chart library, computing axis ticks - so, working with some kind of graph. Again, the exact details don't really matter, but I highlighted in red where I see a little bit of pain, and I feel a little bit of sadness when I see this. The programmer, as in the previous example, is starting from good intentions. Look at that: we got a 1 and a 10. This is great. So we've got decimal numbers here; those inputs are exact; those are integers. But then we do a little bit of arithmetic: we do a division, maybe some exponentiation. And then, at the end, we multiply by some kind of tick increment and convert this whole thing to a native JS number. So this seems mostly clean - we start in the decimal world - but we then end up converting to a Number, which again undermines a little bit of the point of using decimals in the first place. + +JMN: Here's an example from Shopify, `buy-button-js`; this is a little library that Shopify has kindly made available. Again, this probably doesn't matter too much in all of its detail; just look at the red part, where we're trying to convert something that is possibly inexact and roll with that. So we see that the data we're given are JS numbers and strings. The question is: could this be improved if we had decimals? And I think the answer is yes. + +JMN: So looking at these examples, we see that there are a lot of cases where JS programmers are aware of decimal numbers.
They have the tools to reach for them, but the computation and the representation, from beginning to end, are perhaps not faithful, so some of the motivation for using decimals is undermined. But, thankfully, at least some developers are aware of these libraries. That's a good thing. There's a lot of really mission-critical data that needs to be represented as decimals, and thankfully some programmers are doing that. + +JMN: We also see, if we look at those examples and many others not on the slide, that basic arithmetic is sufficient for most use cases. We see that both front-end and back-end use cases exist: this is not just a browser-side issue, nor just a back-end issue with, say, a database adapter. We see that developers are trying to create decimals from JS numbers. Again, the intention is right; it's good that they're trying to do that. But in some cases that is just not going to be exactly what they're looking for. And in the end, because there is no built-in support for decimals, they usually have to convert these things back to Numbers. So it's a bit of a sad situation, you might say. + +JMN: So that's the motivating argument for this thing. We looked at some products that use decimals. We have taken a look at how the open source world uses them. We have seen that a lot of programmers are aware of decimals and use them, but might still not be getting all of the traction they want out of them. So with that said, let's take a look at the proposal and see what we have to offer. + +JMN: What we propose is Decimal128, from the IEEE 754 standard. In this data model we have a fixed bit width: every value is 128 bits. This has been standardized since 2009 - or 2008, something like that - so it's been around for a while. It supports numbers with up to 34 significant digits, which is really quite a lot.
You know, even among applications that use a very high number of digits, it's quite rare to see something that high, which is why we think Decimal128 is a reasonable fit for the range of values out there. The exponent - the power of 10 - ranges from -6143 all the way to 6144. Again, that is a very wide range of exponents, a vast range. It can handle really enormous values and extremely precise ones as well. + +JMN: We did consider a bunch of alternatives. Some of these have been presented before in plenary, but just to put this in one place: we did consider something like rational numbers. I think I speak for some of us here in the committee - I myself come from the Lisp and Scheme world - these things have been around for quite a long time, and there are plenty of valid use cases for them. But what we find is really fast growth of the numerators and denominators, given the lack of a common base. Even when we add two things together, the numerators and the denominators can grow. Now, that is not necessarily the end of the world, because there is a way to reduce them - you can use the greatest-common-divisor approach; there are clever strategies for these things - but we are concerned about that growth. And in any case, rational numbers are a rather awkward match for the intended class of use cases. We really want to get something like a decimal representation, a digit string, out of these values, and that requires long division to get out of a rational number. It can be done, of course, but it is just a bit of an awkward calculation. + +JMN: By contrast, with Decimal128, comparisons always take place with a fixed common base. There is a fixed bit width.
So there is potentially some need for normalization - which is like removing trailing zeros - but that's much more straightforward than, say, computing the greatest common divisor or reducing a fraction so that there are no shared factors. We also see that Decimal128 is just a great match for the intended use cases, certainly much better than rational numbers. + +JMN: Another alternative that we considered is "BigDecimal", arbitrary-precision arithmetic with unbounded digits. We decided against this in favor of Decimal128. Similar to rational numbers, we are concerned about rapid growth in the number of digits: even simple calculations can lead to large growth - think about multiplication and division, which can generate quite a lot of extra digits. And some common operations even require an extra parameter to be specified to make sense at all, division in particular. + +So, one divided by three, if we think about that in the BigDecimal world - well, we would have to specify how this thing is cut off, or provide some kind of parameter or ambient default. The worry there is that one divided by three doesn't have a fixed meaning, but depends on some parameter being set in the background. By contrast, with Decimal128 there is always a maximum number of significant digits, so all computations are backstopped: there is a maximum number of significant digits that get generated, regardless of how complex the arithmetic gets. So one divided by three, for instance, just always works. It is a value: 0.333 and so on, 34 threes. In the worst case we know that our computations are backstopped; there's no ambient parameter; it is in the data model. + +JMN: So, the proposal is to have some kind of new library object with a constructor. Here are just a couple of examples. The idea is that we will accept exact inputs.
So for instance, the first one is a simple decimal string. We can permit some kind of exponential notation if you want. We're even allowing some JS Numbers to be given, if they're exact. BigInts also continue to work, because they are exact. But in the last example, given the Number `42.3`, this throws, because the argument is just not exact; it is not an integer.
+
+So the idea is: you can enter the decimal world from exact inputs. We have a skinny, lean API for arithmetic. We just have addition, subtraction, multiplication, division, remainder, and absolute value. There is some discussion about what else we could add there. That's fine; I'll leave that for some discussion. But that's what we're currently looking at. We don't want to go much beyond this.
+
+JMN: So some of the math didn't make the cut. If we look at the IEEE 754 spec, we can see there are really quite a lot of mathematical operations specified there. If you look at the `Math` object, you can see those things there, too. But we found that basic arithmetic is sufficient for most use cases. There are some cases which are a bit of a gray area for us. For instance, square root. It's not currently in the present version of the spec. In earlier versions, in my own thinking and discussion with the champions, there was a time when we were thinking about adding it, but then we ended up cutting it. I don't really have a good knockdown argument against including it. So, you know, maybe that could be included. But it is just not there. Things that are a bit more clear-cut are sin, cos, tan, the other trigonometric functions: we found low usage of these in the wild. There are some use cases out there, but just not that many. We also did a survey and found that developers often didn't really mention the need for these things. Likewise, things like the logarithm function and exponentiation didn't make the cut.
The argument here is that it's unclear whether there would be added value, given that they're already available in `Math`, and a Decimal128 variant would be just as inexact as the one available in `Math`, possibly just with more significant digits. Another thing that could be added, and would be very simple to implement, is negation: just change the sign of the argument. Could be done. It's currently not there.
+
+JMN: Rounding is also one of the things that we need to support. We are following the IEEE 754 spec and saying that the half-even rounding mode is going to be the default. Open for discussion there. We follow this in the `Math` object and it's in the spec, so it seems like a reasonable choice, but we're open to arguments for something else. That's fine.
+
+Rounding, by the way, is something that can be specified in all of the arithmetic operations. So if you start pushing the limits with your significant digits, then you might need to add some kind of rounding parameter. All of those operations that you saw earlier also accept a rounding parameter; I just omitted it from the examples to keep them simple.
+
+JMN: We do support NaN and Infinity here. Again, these are also in the spec, and of course, if you look at JS Number, you also see NaN and Infinity there. There's been a bit of discussion in the last couple of days about these things; actually, in some slides later you will see I have qualified some of the statements I'm making here, like A equals B, these kinds of things. But the main point is that NaN and Infinity do exist in the proposal. So, for instance here, you can see that we're exposing some properties, like whether a decimal is NaN or is finite. Here we are taking NaN, minus infinity, and some kind of finite but extremely precise number. You can see that these are representable just fine.
+
+JMN: We're also thinking that `valueOf` is going to throw.
And the intention here is to avoid any kind of intermingling of decimal values with any other arithmetic in JS. Thanks, by the way, to JWK, if you are on the call; he made this suggestion to me back in Japan. I really appreciate it; it was a very nice insight.
+
+JMN: And we propose no operator overloading, but we're not cutting ourselves off from adding it later if that ever reappears. There are really quite a lot of implementation complexities there. Also relevant in that realm: the proposal to do operator overloading was withdrawn recently. We're not trying to propose any literal syntax either, because that might make a misleading suggestion to developers: they might expect operator overloading to exist if there are literals, and if it doesn't, that is going to be very confusing. And there is no new primitive type. Nonetheless, I think we are not cutting ourselves off, because we could add it later. It would just be that Decimal128 objects would be wrappers for the new primitive if we were to do that.
+
+JMN: There's one little technical point that I want to talk about. A very important one. There's an issue that came up many times in plenary about normalization. The main task here is to find some kind of simpler form of a number, or the simplest form of a number. The main point is that in Decimal128, the official one, all digits are preserved, including trailing zeros. How should these be presented and made available to a JS programmer? There are a couple of approaches that we've considered here. One is the "always normalize" approach, where Decimal128 values are always normalized when they are created, compared, and serialized. The mental model here is actually quite nice: we think of the decimal values as mathematical values. So for instance, if I'm given 1.20 as an input, that is just 1.2; the final zero is just removed and can't be recovered. Another approach, quite valid, is to keep all input data.
So in particular, that would imply that we do not delete the trailing zeros. By the way, I just want to note that this normalization issue also applies to other parts of the number, too. We had some discussion about the number 300, for instance, which can be represented in a couple of different ways. Normalization applies to that as well. But this is the main issue.
+
+JMN: These are two entirely valid approaches to the normalization issue. So what do we do? I think I have a solution here. Look at these three decimals here: `1`, `0.3`, and `0.7`. Well, these comparisons could be false, perhaps, if you take a very strict view of what equals means, but they are true as numbers. And surely this is just the tip of the iceberg; there are probably all sorts of cases like this if we don't normalize.
+
+The other proposal, of never normalizing unless necessary, or at least unless explicitly asked for, would allow things like this, coming from the `Intl` side of things. Think about this: let's say we want to work in the German language, and our input is a decimal of -42.00. Currently, if we format that thing, we get `-42`, but actually it should be `-42,00`. So information is getting lost and can't be recovered, at least in any direct sense. So the thinking here is that trailing zeros should be part of the number. They belong to the data model.
+
+JMN: The solution that we would like to propose for these two apparently irreconcilable approaches is what I call "normalize by default". When constructing a Decimal128 value, all digits are preserved. Arithmetic operations respect all digits of all arguments. But normalization does happen in two ways. If you serialize a decimal to get a string out of it, that will strip the trailing zeros by default, but they can be preserved if you know what you're doing and use the right argument.
Less than and equals are going to compare decimals by mathematical value, even if the trailing zeros had been implicitly removed. But optionally, they can be taken into account. So the idea, let me just rephrase that, is that decimals for JS programmers can be like mathematical values out of the box. But if you know what you're doing, you can get the digit strings. So what would that look like? If we were to take, say, `0.8` and `0.2`, add them up, and ask for the `toString`, well, that's `"1"`. Good, that's right: it is one as a mathematical value. But here, if I want to turn off normalization, I can recover that extra piece of information and get `1.0`.
+
+There are a couple more examples here. For instance, equals and less than should also compare by mathematical value. By the way, there are some last-minute changes being highlighted here in blue that reflect some discussions happening on GitHub about this issue. But the point is that even if the underlying data has trailing zeros in it, or maybe zeros on the other side of the number, like in the `3e2` example, then normalization is just going to apply. We always compare by mathematical value.
+
+This is just an example of, say, computing a bill. I'm starting to run a little bit short on time and would prefer to get to the discussion, so I leave it to you to take a look at this one. This is a very common example.
+
+JMN: Let me just sum up here. The semantics of the API we're proposing work with IEEE 754 Decimal128. There will be a constructor that takes strings, BigInts, and integer Numbers. There is no syntax, just basic arithmetic. `toString` is going to have an option to emit a decimal string or some kind of exponential notation. `valueOf` will throw. There is integration with NumberFormat. And we're going to normalize by default. That's the proposal. That's the diff from the previous presentation to this presentation, really.
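As a rough illustration of the "normalize by default" serialization described above (the real `Decimal128` API is not implemented anywhere yet, so this sketch just operates on decimal digit strings; the function name and option are made up):

```javascript
// Sketch: strip trailing fractional zeros from a decimal digit string,
// as a "normalize by default" toString might, with an opt-out flag.
// Illustrative only; this is not the proposal's actual algorithm.
function decimalToString(digits, { preserveTrailingZeros = false } = {}) {
  if (preserveTrailingZeros || !digits.includes(".")) return digits;
  // Remove trailing zeros after the decimal point, then a bare ".".
  return digits.replace(/0+$/, "").replace(/\.$/, "");
}

console.log(decimalToString("1.0"));                                  // "1"
console.log(decimalToString("1.0", { preserveTrailingZeros: true })); // "1.0"
console.log(decimalToString("-42.00"));                               // "-42"
```

Note that zeros before the decimal point (as in `"300"`) are untouched: in string form they are significant, which is why the "zeros on the other side" case needs separate handling in the real data model.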
+
+JMN: So the conclusion is that we have spec text available. NumberFormat integration is being worked out, not quite done; there are some other parts that might be touched, like PluralRules. There's an NPM library out there for you to take a look at. Test262 tests are missing, but that's something we envision doing. It should be fairly straightforward to take the current tests in the NPM library and convert them to the test262 format. Great, that's all I have, and I'm happy to go to the queue. I'm not looking at the queue, so I can't manage anything.
+
+WH: Okay, I have a clarifying question about DingTalk. What number operations on decimals does DingTalk support?
+
+LIU: I'm going to answer these questions. DingTalk supports basic mathematical operations such as addition, subtraction, multiplication, and division, as well as arbitrary calculations and predefined functions.
+
+WH: Can you round, and do exponentiation?
+
+LIU: Exponentiation? Yes.
+
+WH: Okay, next I have a walk-through of the current state of Decimal. I've been doing a lot of work on this, finding lots of issues and having extensive chats about them. I'd just like to quickly go through the items.
+
+WH: The presentation implied that calculations using Numbers were wrong. In fact, that's not necessarily the case. Some of the ones that I could see in the presentation were intentionally using Numbers because Decimal did not provide the right features, and had those calculations been done using Decimal, they would have been equally wrong.
+
+WH: There are a few slightly different versions of the IEEE 754 Decimal standard, and the one I have seems to be different from the one that the champions have.
+
+JMN: I was just looking at the 2019 version.
+
+WH: So I've been doing extensive reviews on this. The proposal submitted to this meeting differs quite a bit from what was in the presentation. Let's go through some of these.
The proposal does not include any conversions to and from Numbers. There is a conversion to BigInts, but not from BigInts. It's unspecified what it actually does, and, depending on how it is specified, it may be very hard to use correctly.
+
+WH: The proposal refers to the IEEE spec for a lot of the operations. The problem is that the IEEE spec does not actually specify what those operations do. As a result, nothing defines what common operations like converting to and from strings are doing. I would much prefer the operations to be specified the way that we do in the rest of the spec, such as for Numbers, for which we actually provide algorithms for addition, multiplication, and such, done on real numbers and then rounded. In fact, the algorithms here would be identical to what we do for Numbers other than the rounding at the last step, which would round to decimals instead of rounding to binary floating-point numbers.
+
+WH: There are a number of issues with improper operations and mathematical values in the spec, where it tries to create indeterminate forms such as subtracting mathematical +āˆž from mathematical +āˆž, which doesn't work.
+
+WH: There are bugs in NaN handling.
+
+WH: The spec does not include negation of a Decimal number. I hope we include negation because most people will get it wrong. It's not well known that, if you don't have negation, the correct way to negate an IEEE Decimal number is to multiply it by -1. If you instead subtract it from zero, you'll get the wrong answer and you'll also mess up your precision.
+
+WH: The remainder operation in the spec uses IEEE 754 remainder semantics, which is not what anybody other than really advanced numeric experts would expect. If you compute the IEEE 754 remainder of 42 divided by 10, you'll get the expected remainder of 2. However, if you compute the IEEE 754 remainder of 46 divided by 10, you'll get a very surprising remainder of -4. I doubt most of our users would expect that.
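Both pitfalls can be sketched with today's binary Numbers (an analogy only; no Decimal128 is implemented here). IEEE 754's `remainder` rounds the quotient to the *nearest* integer, ties to even, unlike JS `%`, which truncates; and negating via `0 - x` differs from multiplying by `-1` in the sign-of-zero case:

```javascript
// Round a quotient to the nearest integer, ties to even
// (the rounding that the IEEE 754 remainder operation uses).
function roundTiesToEven(q) {
  const floor = Math.floor(q);
  const frac = q - floor;
  if (frac > 0.5) return floor + 1;
  if (frac < 0.5) return floor;
  return floor % 2 === 0 ? floor : floor + 1;
}

// IEEE 754 remainder: x - y * n, where n = x/y rounded to nearest, ties to even.
function ieeeRemainder(x, y) {
  return x - y * roundTiesToEven(x / y);
}

console.log(ieeeRemainder(42, 10), 42 % 10); // 2 2   (they agree here)
console.log(ieeeRemainder(46, 10), 46 % 10); // -4 6  (the surprising case)

// Negation pitfall, in the binary analogy: subtracting from zero loses -0,
// while multiplying by -1 preserves the sign.
console.log(Object.is(0 - 0, -0));  // false: 0 - 0 is +0
console.log(Object.is(-1 * 0, -0)); // true
```

For Decimal128 the zero/precision details differ, but the shape of both surprises is the same.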
We should define remainder the way everybody else does: the same as what we do for Number `%`, using the same algorithm as for Numbers.
+
+WH: The spec relies on words which are not actually defined in IEEE 754, such as "exponent" and "significand", which makes it impossible to understand much of the spec. IEEE 754 has no unique definition of the "exponent" or "significand" of a given IEEE 754 number. You can have the exact same bit pattern and, depending on where in the IEEE spec you look, its "exponent" can mean this or that, and its significand can be an integer or a fraction. To avoid confusion, we should define those mathematically and not refer to the IEEE spec.
+
+WH: There's no square root and no exponential, both of which will cause problems. To get those, users will try to convert Decimals to and from Numbers, but we provide no way to do that. There are no conversions in this spec to and from Numbers.
+
+WH: Other things that we've discussed on GitHub are that equality and relational comparisons shouldn't take a rounding-mode parameter, because it makes no sense for a comparison to round. IEEE 754 comparisons never round.
+
+WH: There is confusion regarding the IEEE totalOrder operation, which is so obscure that we don't provide it for Numbers, and I don't know if anybody ever noticed. It's a very obscure operation which is very difficult to use correctly unless you're a numeric expert. I think we should cut it from the initial version of the spec.
+
+WH: I also think that we might want to cut operations such as exponent and significand, or make them read-only, so you can get the significand of a Decimal value, but you cannot construct a Decimal value with a given significand.
+
+WH: Okay, so I've been helping with a lot of these things. I think all of them are resolvable, but they will take a lot of extra thought and discussion, and we should have those discussions.
I would like to invite anybody interested to participate in those discussions on GitHub.
+
+CDA: Great. Thank you, WH. We have less than 10 minutes of time left and a considerable number of items on the queue. I was on the queue just about the three different versions of IEEE 754 you were referring to; we can take that offline and coalesce around which versions we are looking at.
+
+NRO: (from queue) I clearly agree we should define all operations and `toString` behavior. Given that the requirement for Stage 2 is to have a draft spec, this probably doesn't need to be a blocker.
+
+SYG: I'm not sure if WH already covered this, but please correct me if I'm wrong: there was a thing in the slides about checking for exactitude in the Decimal constructor. How do you implement that?
+
+WH: I've looked at the spec, and the current answer is that you can't convert Numbers to Decimals. It only allows strings.
+
+SYG: I see, but the slides show something different.
+
+WH: Yeah, the slides don't match what the spec is doing.
+
+SYG: Okay, if the intention is to allow Numbers, the only way I know to implement an exactness check for a double is to track the literal -- track its span in the source text -- so you can reparse the string when you need to check whether the double representation is exact. That is a no-go as an implementation technique, so at least narrowly, that particular feature needs to be rethought.
+
+WH: I agree.
+
+DE: No one is positing the complicated thing that SYG mentioned. We're not going to make Numbers represent other things. A basic option is only safe integers.
+
+JMN: Just to confirm, that's the intention.
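A minimal sketch of the input policy DE describes (strings and BigInts accepted; Numbers only if they are safe integers), together with the `valueOf`-throws behavior mentioned earlier. The class name and exact checks are illustrative assumptions, not the proposal's spec text:

```javascript
// Illustrative sketch only: accept exact inputs -- decimal strings,
// BigInts, and safe-integer Numbers -- and make valueOf throw so
// Decimals never silently mix with Number arithmetic.
class Decimal128Sketch {
  #digits;
  constructor(value) {
    if (typeof value === "string" || typeof value === "bigint") {
      this.#digits = value.toString();
    } else if (typeof value === "number") {
      // Only safe integers are unambiguously exact as Numbers.
      if (!Number.isSafeInteger(value)) {
        throw new RangeError("inexact Number input: " + value);
      }
      this.#digits = value.toString();
    } else {
      throw new TypeError("unsupported input");
    }
  }
  toString() { return this.#digits; }
  valueOf() { throw new TypeError("cannot implicitly coerce a Decimal"); }
}

new Decimal128Sketch("42.3");  // ok: exact decimal string
new Decimal128Sketch(42n);     // ok: BigInts are exact
new Decimal128Sketch(42);      // ok: safe integer
try { new Decimal128Sketch(42.3); } catch (e) {
  console.log(e.message);      // rejected: the Number 42.3 is not exact
}
try { new Decimal128Sketch(42) + 1; } catch (e) {
  console.log(e.message);      // valueOf throws on implicit coercion
}
```

This avoids SYG's concern entirely: `Number.isSafeInteger` needs no source-text reparsing.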
+
+DMM: I was just curious: we seem to be building in a difference in usefulness between rounding in Decimal, which accepts rounding modes, and things like `toPrecision` on Number, which does not accept a rounding mode. That is part of the reason you get the problems of calling `toPrecision` and getting the wrong answer: you rounded the wrong way, or you can't specify the way you're rounding those numbers. I was wondering -- clearly it doesn't belong as part of this proposal -- but are we going to try to enhance things like `toPrecision` to have equivalent capabilities?
+
+JMN: Do you mean being able to specify the number of fractional digits, or being able to specify, say --
+
+DMM: Being able to specify the rounding mode used and that sort of thing. At the moment, I think `toPrecision` doesn't allow you to say half-even or ceiling or floor or any of the options that you might want.
+
+JMN: Yeah, I think that's correct. But as far as I know, there's no intention to add that here. I think it just uses the half-even rounding mode by default, if I understand correctly.
+
+DMM: On rounding, I think that's not explicitly said. On the decimal side -- I can't remember if `toPrecision` explicitly says what rounding mode it uses. I think it's effectively there in the algorithm it specifies.
+
+DE: I don't think it would be a good idea to add those, because Numbers just cannot express the result. That's, like, why we're introducing Decimal. So the answer to developers will be: just never use `toFixed` or `toPrecision` or anything like that.
+
+DMM: Okay. The other queue item I had was a question on whether we think unnormalized numbers are enough to deal with getting the right number of decimal places.
It clearly works as long as you have enough precision to have that number of decimal places after the point, but if you deal with large enough numbers, you will hit cases where you cannot denormalize the number -- you cannot get the number into the right state to have `toString` do the thing you want. Do we think the number format is enough to handle that in some way, or is there a plan to enhance it to allow specification of the required number of decimal places, or things like that?
+
+JMN: I guess maybe we could take this offline, but I'm curious to hear about the values you have in mind that can't be represented, or where one somehow gets into trouble.
+
+DMM: Obviously you have to get quite large. I've dealt with bug reports before on the Ruby side and also on the Java side when specifying number formats, about what should happen if the amount of precision specified in the number format, for example, is more than the number of significant digits in your number. It is often that people want to format a number, even a really large one, as an integer plus two digits after the decimal point. But if you get a large enough number of significant digits before the decimal point, you cannot convert that number into any unnormalized form that has those digits after the decimal point. So it's an edge case, but it's one that people do eventually complain about.
+
+JMN: Do you imagine a kind of unsatisfiable constraint problem, where I say: give me, I don't know, 10 fractional digits, but then the integer part is so huge?
+
+DMM: Exactly. If I've got 25 digits before the decimal point, then I can never satisfy that, because I haven't got enough significant digits in my number format.
+
+JMN: I see. I'm not sure I have a clever response to this at the moment, but it is well noted. Great issue. Let's continue with the queue.
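DMM's unsatisfiable-constraint case comes down to simple arithmetic against Decimal128's budget of 34 significant decimal digits. A sketch (the helper name is made up):

```javascript
// Decimal128 carries at most 34 significant decimal digits.
const DECIMAL128_MAX_DIGITS = 34;

// Can a value with `integerDigits` digits before the point also carry
// `fractionDigits` digits after it, unnormalized? (Illustrative helper.)
function canCarryFraction(integerDigits, fractionDigits) {
  return integerDigits + fractionDigits <= DECIMAL128_MAX_DIGITS;
}

console.log(canCarryFraction(25, 2));  // true: 27 digits fit in the budget
console.log(canCarryFraction(25, 10)); // false: 35 digits exceed it (JMN's example)
console.log(canCarryFraction(33, 2));  // false: the "integer plus two digits" case
```

So unnormalized digits solve the common cases, but past 34 total digits no representation can satisfy the formatting request, which is what `Intl.NumberFormat` integration would have to define behavior for.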
+
+EAO: I'm a bit worried about how little of the integration with Intl.NumberFormat and Intl.PluralRules seems to be defined so far. In the presentation in particular, I was left uncertain about what happens when formatting the -42.00 example. Is that going to format as `-42` or `-42,00`?
+
+JMN: Well, the goal is to get the `"-42,00"`, of course, but it sounds like you are unconvinced that the current spec text would do that. Is that right?
+
+EAO: There's a PR I've not really looked at thoroughly, but the current spec text doesn't seem to say very clearly how to achieve that. We would presumably need to modify how the minimumFractionDigits option works, and so on. And given that we are leaving out syntax, the integration point of Decimal with the existing JavaScript spec is mostly through what happens in Intl.NumberFormat. So I would hope that this is well specified, and I'm a bit worried that it isn't.
+
+CDA: Okay. I see some other items on the queue; some have disappeared. But we are at time, and we have quite an overflow for the rest of the day. So if folks could please follow up on whatever you're interested in, either in Matrix or on GitHub, and continue working that out. And, yes, we are past time. JMN, would you like to ask for Stage 2?
+
+JMN: I would like to ask if there is consensus for Stage 2. I understand that there are a number of issues here with the spec text, and I really appreciate all of those who have contributed to these discussions on GitHub. In my view, a lot of these things are not fundamental blockers. These represent cases that, I think, could very well be worked out. But I'm happy to hear any counterarguments.
+
+JHD: Just to avoid speaking at length, I'd love to talk more about my topic (primitives), but I do not think that it's ready for Stage 2.
+
+MM: Given limited time, I think I'll just let it stand that I'm not willing to advance this to Stage 2.
+
+DE: When blocking, usually there's a reason accompanying it. Can you explain a reason?
+
+MM: Yes. I think that there are too many open questions altogether. I understand that many of these questions can be resolved during Stage 2, but it should advance to Stage 2 only if I believe -- when it's more likely than not, let's say; I'm putting my own words on this -- that these issues can be worked out to the point where it's worth advancing. And at this point, I don't see the advantage of putting it into the spec versus leaving it as libraries, and I see a lot of advantage to leaving it as libraries.
+
+DE: Okay, maybe you could elaborate on that advantage offline.
+
+MM: Sure.
+
+CDA: Okay, we are well past time. But WH, did you want to be brief?
+
+WH: The current proposal has not met the criteria for Stage 2, regardless of what we think about its merits: the semantics of the core features do not exist yet.
+
+CDA: Okay. Thank you, JMN.
+
+### Speaker's Summary of Key Points
+
+- Decimal's motivation and concrete API were presented in detail. The current version of Decimal is based on objects with methods, rather than primitives and literals, and uses the IEEE 754 decimal128 data model.
+- WH raised a number of technical concerns about the data model and the quality of the specification, while agreeing with the fundamental approach.
+
+### Conclusion
+
+- Decimal stays at Stage 1.
+- WH will work with the proposal champions offline to address concerns.
+
+## Shared Structs Discussion
+
+Presenter: Shu-yu Guo (SYG)
+
+- [proposal](https://github.com/tc39/proposal-structs)
+- [slides](https://docs.google.com/presentation/d/1a53adMbL_Uqb1KxnY-r6Ie6n1PQXvNw3JgUOvRYFido/)
+
+SYG: Great.
So this is not asking for stage advancement; this is a prelude to asking for stage advancement, where I'd lay out the feature set that we want to ask stage advancement for this year: the shared structs and unshared structs proposal. It has been pared down a little bit from some previous presentations. So I'm going to start with unshared structs. These are just single-threaded, normal structs. There was a bit of an open question for a period where we were considering: is it useful to include unshared structs as part of the proposal? The current thinking is that it is, because the restrictions that come with structs are, we believe, independently useful for expressing certain programs, and you can get some performance benefits and some memory layout benefits out of them. So unshared structs are objects that are declared to have fixed layout. They are closed objects instead of open. You can't add properties onto them outside of the properties that are declared; they cannot gain new own properties. They are basically sealed after construction. They have a transitively immutable prototype slot, so the prototype itself is considered part of the layout, and the prototype chain is immutable and fixed as well. That's not to say that the prototype object itself is deeply immutable; the prototype object itself need not be immutable. It's just that which prototype object is in that slot is immutable, because that's considered part of the layout. These things have one-shot initialization, which means that unlike, say, a class -- where, if the initializer for an own property throws during initialization, you can get a half-initialized object -- these structs are one-shot initialized at the beginning, which means that all the declared properties are initialized to undefined before user code even gets access to the receiver.
So after all the properties are added, the instance is sealed. This restriction basically means that superclasses must also be structs: if your superclasses can break this invariant, then we don't have this invariant. And because of that, there's really no construction per se; really, it's post-construction initialization. Because you get these sealed instances, the constructor call -- which we might want to rename to something else, but for the sake of being concrete here -- only does post-construction initialization of the fields. One good consequence of that, in my opinion, is that there is no return-override trick. You're not constructing the thing; you're just given an already initialized instance. If the initializer returns something else, well, that's just ignored. It's not a constructor, so you can't do return override. One new thing that I'm presenting here, which has not been discussed in previous presentations, is that we're also proposing that struct methods be non-generic on the receiver. You can think of it as an extension of the, in my opinion, correct design choice we made for `Set` methods, where the built-in `Set` methods are non-generic on the receiver but can take Set-likes as arguments. So struct methods, if structs have methods, would throw on incompatible receivers. You can imagine that they have some check at the beginning that says: if the `this` value is not actually one of my instances -- if it's a generic object -- just throw. This has nice benefits downstream in code-gen, in that you can assume everywhere in the code that the receiver is always of an expected layout, and that elides a lot of checks. So that's the feature set for unshared structs, which we think is useful in itself.
Even without any kind of shared-memory sharing.
+
+SYG: So as a quick example, you can have this `struct Box {...}`. Maybe the initializer should be named something else, but for the sake of the example, I just left it as `constructor`. You can do things in the constructor on fields that are already declared, so it can do `this.x = x`. Say there's a method. Keeping with the example: you can make a new `Box`, and you can assign to `x` on it, since it's declared. You can't assign to a thing that's not declared; that would throw, because it's sealed. You can't change the `__proto__`, because that's sealed. And if you try to call the method with a generic object literal as the receiver, one that is not actually a `Box` instance, it will not work; it will throw. So that's unshared structs.
+
+SYG: Moving on to shared structs. These have the same restrictions as unshared structs, plus they can be shared across agents. The data fields are shared. And there's a pretty deep restriction that shared things can only reference primitives or other shared things. So shared structs cannot point to unshared objects; they can only point to primitives -- strings, numbers -- or other shared objects. This is, one, for implementation ease, and two, for segregating the heap of the program into a shared part and an unshared part. Because everything on the web and in JS today is unshared, it is not possible to retrofit things to be shared, and it would also be a bad idea to retrofit them to be shared. So anything that is shared has to be opt-in, and this is opt-in at a pretty deep level. You can't have arbitrary shared-to-unshared edges in your object graph; that would open a whole world of hurt. We would have to figure out what it means for shared things to point to unshared things, and it's not generally possible or easy to do. So the restriction here is that shared things can only point to other shared things.
They can either have a null prototype, so they don't have a prototype at all, or a realm-local prototype. I will go into the details of what realm-local means here. This was presented before, so it will be a recap for folks who were here. If there's no prototype, obviously there's no place to put the constructor property, and they don't have one. And because functions are not shared things -- JS functions are deeply unshared things -- you can only have function-valued class elements, or struct elements like getters, setters, and methods, if you have a realm-local prototype. That's because in the case of a realm-local prototype, the prototype is a realm-local object, which is itself an unshared object. So what are realm-local prototypes? The point here is that we want to enable attaching behavior -- oh, this is a typo; it should say realm-local instead of thread-local. It was an open question for a while whether we should go with thread-local, meaning agent-local, or realm-local. I've decided on realm-local because that's the much more natural thing in JavaScript.
+
+The motivation here remains the same, which is that because we can't share functions, we can't just put JS functions into a shared struct. But that proved to be a big DX hurdle in early prototyping. The partner feedback has been: if I can't attach functions, that means I have to manipulate these objects with free functions, which really harms adoption, especially incremental adoption. Say you have an existing code base where you have a corner you think can benefit from multithreading. If converting that corner to use shared structs means that, for the one or two classes you want to convert, all users have to change the use sites to free functions instead of methods, that is just this giant refactoring task that is pretty difficult to accomplish.
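A rough approximation of the unshared-struct semantics described above, using only today's JavaScript: `Object.seal` stands in for fixed layout, and a WeakSet brand check stands in for non-generic methods. All names here are illustrative, not proposal syntax:

```javascript
"use strict";

// Brand set: tracks which objects count as Box instances.
const boxBrand = new WeakSet();

const boxProto = {
  // Non-generic method: throws on receivers that are not Box instances.
  double() {
    if (!boxBrand.has(this)) throw new TypeError("not a Box");
    return this.x * 2;
  },
};

// "One-shot initialization": the declared field exists before the
// instance is sealed, and no new properties can be added afterwards.
function makeBox(x) {
  const box = { __proto__: boxProto, x };
  Object.seal(box);
  boxBrand.add(box);
  return box;
}

const b = makeBox(21);
console.log(b.double());         // 42
b.x = 5;                         // ok: declared fields stay writable
console.log(Object.isSealed(b)); // true: undeclared assignments throw in strict mode
try {
  boxProto.double.call({ x: 1 }); // plain object: brand check fails
} catch (e) {
  console.log(e.message);         // "not a Box"
}
```

What real structs add beyond this sketch is engine-level fixed layout (enabling the code-gen benefits SYG mentions) and, for shared structs, cross-agent sharing, neither of which userland code can emulate.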
SYG: So how do we enable attaching behavior that can be called like methods on shared structs? The proposal puts forth realm-local prototypes, which are basically what they sound like: the prototype object is a realm-local thing. I think the easiest way to think about how this works is that, if you squint, this is how primitive prototypes already work. If you have a number primitive, that's not an object, yet it has methods on it you can call. How does that work? Every time you treat it like an object and want to call a method on it, you magically get a realm-local prototype: you get `Number.prototype` from the current executing realm. Analogously for shared structs, if you want to call a method, it looks up the prototype for that shared struct definition inside the current realm. You have a per-realm copy of these prototypes. But because these are realm-local and unshared, they don't have the restriction of the shared stuff where they cannot point to unshared things. You can put arbitrary objects and arbitrary functions on them.

SYG: It is still an open question how we signal that a shared struct ought to have a realm-local prototype. The choices are either by default, or some kind of hand-wavy opt-in syntax. To avoid any distractions, this example just leaves it as some hand-wavy opt-in; there is no concrete syntax proposed. The choices are either by default, if we think that is the behavior you always want, or some opt-in. The idea here is that if you have a realm-local prototype, you can make this shared struct, assign stuff into the fields, and then get the prototype and just add stuff to it. I realize this contradicts what I said earlier about the transitive immutability of the prototype chain, that the superclasses must all be structs.
If we have a hand-wavy realm-local opt-in mechanism, this may need to be further worked out: must the realm-local prototype have a fixed layout, and how is that provided? That could be straightforwardly provided by an `extends` clause here, for example, so you know that the right-hand side, the superclass, is itself a struct that is realm-local and has a fixed layout. Leaving that contradiction aside — apologies, these slides were taken from an earlier presentation — what slide 6 is designed to illustrate is just the realm-localness: that getting the prototype of `p` here does not get a shared object, but an unshared object that you can then put functions onto.

SYG: This is in the main thread. If I communicate that thing to another thread — another worker in this case — when it's communicated to the other worker, which has its own realm, the programmer has not yet set up the per-realm prototype for this shared point struct definition. So initially, this function doesn't exist in the other realm. You have to set it up yourself, after which you can call it; you get your own function. That is the per-realm prototype mechanism that we intend to use to solve the how-to-attach-behavior issue.

SYG: That points to a second problem, which is also for DX reasons: what we're calling "the correlation problem". If you look at this example, what it shows is that you have to have some code where you set up the methods that you want to have available on the prototype, per realm. You set it up once in the main thread and you set it up again in the worker. Most of the time — if not basically all the time — you want these methods to be the same across all your threads. If you're programming multithreaded stuff natively, you're not setting up per-thread or per-realm specific implementations of your types.
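As an editorial sketch of the per-realm setup described above (the `shared struct` syntax is the proposal's proposed, not-yet-final syntax, and the `norm` method name, `main.js`/`worker.js` file names, and `worker` object are hypothetical; this is not runnable today):

```js
// main.js — hypothetical proposed syntax
shared struct SharedPoint {
  x;
  y;
}

const p = new SharedPoint();
p.x = 3;
p.y = 4;

// The prototype is an ordinary, unshared, realm-local object, so it can hold
// functions even though functions themselves cannot be shared:
Object.getPrototypeOf(p).norm = function () { return Math.hypot(this.x, this.y); };
p.norm(); // works in this realm
worker.postMessage(p);

// worker.js — a different realm: the shared fields arrive, but this realm's
// copy of the prototype has not been set up, so p.norm is initially undefined.
onmessage = ({ data: p }) => {
  Object.getPrototypeOf(p).norm = function () { return Math.hypot(this.x, this.y); };
  p.norm(); // now this realm has its own function object
};
```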
You have one implementation that works across all threads. The reason we're doing per-realm is that functions are deeply unshareable: they close over the global object of the realm in which they were created, they have a realm-local `Function.prototype`, there are a lot of language reasons that they are deeply unshareable. So if we believe the use case almost all the time is that you want the exact same methods on all of the prototypes, how do we facilitate that to make it a little bit easier to work with? The mechanism we're proposing here is something we're calling auto-correlation. We've called this the registry in the past, but that word is kind of overloaded and has some implied meanings. To work an example, say we have this `SharedPoint` shared struct declaration, with some hand-wavy prototype mechanism, and there's a module that has a single — "instance" is not the right word — a single textual occurrence of the `SharedPoint` shared struct declaration. I import this in worker A, and I import the same thing in worker B. The problem is that because this was declared to have a realm-local prototype, it has two different prototypes. And they don't necessarily play nice, in that you have to re-set-up the prototype in each worker. So, wait, why did I include this next slide? Ignore that slide.

SYG: So the correlation problem is that we want to correlate the multiple imports from the same textual occurrence of the shared struct declaration somehow. We want to correlate them such that we can use the same TLS key for the realm-local prototype and the VM can deduplicate shapes — but this is really a transparent optimization — and such that we match the intuitive mental model developers already have, that their types work the same everywhere across their threads. The design constraint is that ideally there's no new global communication channel.
If you want to correlate something across workers, that sounds like you need to do some communicating across threads, and there have been concerns from folks like MM and MAH that this is a global communication channel. I would like to discuss that in this item today; I'm trying to understand whether this is really a global communication channel.

And hopefully it has better developer experience than manual correlation — manual correlation meaning that the programmer has to copy/paste some initialization code in each worker that sets up the realm-local prototypes for all their shared structs.

SYG: And the performance thing, like I said, is a transparent optimization; it's nice to have, but not the thing we're solving for here. So the proposal is: if we also have a hand-wavy correlation mechanism that says "I want this shared struct declaration to be auto-correlated across all evaluations of this textual occurrence" — meaning that there is some map under the hood that is keyed by the source location of this piece of syntax, like inside structs.js — then the idea is that you can have this one file that declares your type and sets up your realm-local prototype. When you import it in one worker, because it's auto-correlated and the same TLS key is used for the per-realm prototype, it sets up the prototype in worker A, and when you import it in worker B, it sets up the prototype in worker B, and then things just kind of work as you expect. Worker B's `foo` is still going to be different from worker A's `foo`, because it's two different evaluations: their `SharedPoint` declarations are going to evaluate to two different function objects — but ones that point to the same thing under the hood. So, again, ignore this next slide.

SYG: So for the semantics of this auto-correlation mechanism, there needs to be some agent-cluster-wide registry that does this deduplication. What is the key of this registry, this map?
The current proposal is that it's keyed off of source location. On a registry miss, i.e. the first evaluation of a registered shared struct, it is inserted into the registry. On a registry hit, i.e. subsequent evaluations, in different workers, of the same source location — the same shared struct declaration — it checks whether the shape matches exactly, and deduplicates if it matches; otherwise, it does something else, whether it silently does nothing, maybe the console issues a warning, or it throws. And the thing I want to understand from the folks who care a lot about communication channels is: is this an implicit global communication channel? My inclination is 'no', for these reasons, but I would love to hear from the folks on the other side of this. My inclination is 'no' because the key, which is a source location, is not a forgeable thing. If registry hits do nothing on layout mismatch instead of throwing, then it is unobservable. If registry hits throw on a layout mismatch, I suppose you could then use this to leak information across realms and threads, but to exploit it to leak information, it requires modifying and triggering re-evaluations of modules or scripts, because the key is the source location. And where my thinking leads me is: if you can trigger re-evaluation, you can also directly observe other things, more directly than via this mechanism. So that's where I am — my inclination is that this is not actually a communication channel, but I might be missing something.
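The single-file pattern SYG describes might look like the following sketch (an assumption for illustration: the `shared struct` syntax is the proposal's, no concrete auto-correlation opt-in syntax exists, and `foo` and the file names are placeholders; not runnable today):

```js
// structs.js — a single textual occurrence, which is the auto-correlation key
shared struct SharedPoint { // assume this declaration opts into auto-correlation
  x;
  y;
}
// Module evaluation runs once per realm that imports it, so each realm sets up
// its own realm-local prototype with its own copy of foo:
SharedPoint.prototype.foo = function () { return this.x + this.y; };
export { SharedPoint };

// workerA.js and workerB.js each do:
import { SharedPoint } from "./structs.js";
// Worker A's foo and worker B's foo are different function objects, but
// auto-correlation keys both realms' prototypes to the same underlying
// declaration, so instances passed between the workers find a foo everywhere.
```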
It's worth calling out here that this mechanism requires some bundler opt-in as well. Because the key is the source location, this basically means that if you want to use this correlation mechanism, bundlers cannot copy/paste the textual occurrence of the shared struct declaration — they cannot duplicate it, because the copies would have different keys, so it would not be an observably equivalent change to copy/paste the textual declaration. Because of this, the bundler guidance is: if there are shared struct declarations that use the auto-correlation mechanism, they cannot be duplicated. Bundlers have to be aware that these things have to remain a single textual occurrence. In my opinion, this is not a deal breaker, but the bundlers have to be aware of this.

SYG: So that's a deep dive into two aspects of the semantics of shared structs. Beyond shared and unshared structs, we are also proposing shared fixed-length arrays. These are exactly what they sound like: arrays that can be shared, but with fixed lengths, unlike normal arrays, which can grow and shrink as needed. And like shared structs, the elements can only be primitives or other shared objects. They can have a realm-local prototype; there are some other small details on this slide. We're not proposing an unshared fixed-length array because there doesn't seem to be any reason to ever use that — it's just more restrictions for no real gain. In the interest of time, since I want to get to discussion, folks can read the quick example on their own time and I'll skip going through it.

SYG: We're also proposing high-level synchronization mechanisms. The argument for why we're doing this in the same proposal is that we recognize the difficulty that shared memory brings to programming correctly.
And if we give people access to shared things, and we do not give them an easy-to-reach-for way to synchronize those accesses and to have critical sections that provide mutual exclusion for those accesses, that is a big footgun, and that is not a good idea. So it would be good to have these in the same proposal; it's really a package deal. And the `Mutex` is a mutex: a pretty simple, non-recursive, single-owner mutual-exclusion mechanism. It would have `lock`, `tryLock` and `unlock` methods. Previous iterations of the proposal had a callback-taking `lock` method that calls the callback under lock, but now, with explicit resource management, this was changed to more, I guess, traditional, let's say, `lock` and `unlock` methods, so that they can be used as part of `using`.

SYG: There's a condition variable that is basically the condition variable you expect; I don't think there are any surprises here. Like `Atomics.wait`, this condition variable also cannot be used on the main thread — you cannot block the main thread. You can `tryLock` on the main thread, but you cannot `lock` on the main thread, and you cannot wait and block the main thread with a condition variable either. An async version may be added later, but that is clearly out of scope for this proposal, which is already quite large.

SYG: For the memory model, the default accesses to shared things are unordered. The `Atomics` methods on the `Atomics` namespace object are extended to take these shared structs and shared fixed-length arrays, in addition to TypedArrays, to give sequential-consistency support for folks writing lock-free code that needs sequential consistency. There is a requirement that allocation is publication, meaning that once you allocate a shared struct — any shared object — that is considered a publication for the purposes of other threads.
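As an editorial sketch, the `lock`/`tryLock`/`unlock` shape of the proposed `Mutex` can be approximated today with `Atomics` on a `SharedArrayBuffer` (the `SabMutex` class is an assumption for illustration, not the proposal's API):

```javascript
// A minimal mutex approximation over a shared Int32 cell.
const UNLOCKED = 0;
const LOCKED = 1;

class SabMutex {
  constructor(sab = new SharedArrayBuffer(4)) {
    this.state = new Int32Array(sab); // shared cell holding the lock state
  }
  tryLock() {
    // Succeeds only on an atomic UNLOCKED -> LOCKED transition.
    return Atomics.compareExchange(this.state, 0, UNLOCKED, LOCKED) === UNLOCKED;
  }
  lock() {
    // Blocks until acquired; like the proposed Mutex, this must not be used
    // where blocking is forbidden (e.g. the web main thread).
    while (!this.tryLock()) {
      Atomics.wait(this.state, 0, LOCKED);
    }
  }
  unlock() {
    Atomics.store(this.state, 0, UNLOCKED);
    Atomics.notify(this.state, 0, 1); // wake at most one waiter
  }
}

const m = new SabMutex();
console.log(m.tryLock()); // true: acquired
console.log(m.tryLock()); // false: already held
m.unlock();
console.log(m.tryLock()); // true again after unlock
```

With explicit resource management, one would pair `lock()` with a disposer that calls `unlock()`, which is the motivation SYG gives for moving away from the callback-taking `lock`.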
While the accesses to the fields themselves are unordered, the minimum bar here is that the VM can never crash because of data races. And to guarantee that VMs never crash, one strategy the JVM, for example, has used in the past is "allocation is publication": when you allocate a shared thing, that is considered also a publication from the perspective of other threads. The synchronization primitives are all sequentially consistent — so mutexes and condition variables are all sequentially consistent.

SYG: One thing that is new from last time, after chatting with MM and folks: this is a big power-user feature that has a lot of consequences if you opt in, and we don't want you to opt in accidentally. On the web, you cannot opt in accidentally, because there's this thing called cross-origin isolation. The server must send special headers for shared memory to be available on your web page. This is already true today for SharedArrayBuffers: if you do not enable the right headers and have cross-origin isolation enabled, you cannot communicate SharedArrayBuffers, or anything else that has shared memory, across thread boundaries. So there's already this giant opt-in gate on the web, but that is on the host side, not on the JS side. And it may be worth adding some kind of opt-out thing, or maybe opt-in — I'm not sure how that would work yet — but some kind of way to disable it, or have it disabled by default and enabled explicitly, in non-web contexts. Something like a one-way disablement switch, so you can load some run-first code that says "I don't want to use shared memory, and I don't want any of my dependencies to accidentally use shared memory". You could do something like that; I think that is fine.

SYG: And this is kind of tied at the hip to the shared WasmGC proposal, which was the focus of my previous presentation at the last meeting.
This cannot move any faster than the shared WasmGC proposal — the shared WasmGC proposal is at an earlier phase — and this will be aligned in semantics with the WasmGC proposal. And that is it for the feature set that we plan to ask Stage 2 for at some point — probably in June, but maybe the meeting after. And with that, I will open it to the queue.

JHD: Hi. So my queue topic has been discussed a little bit in Matrix, so I might be confused. I originally filed my queue topic on your realm-local slide.

SYG: Which — tell me which slide.

JHD: Keep going back. I think it's the one where you talk about the hand-wavy stuff.

SYG: This one?

JHD: Yeah.

SYG: Okay.

JHD: So in order to have methods be the same across contexts, whatever that is, you need to re-evaluate the method definition code in each context, correct?

SYG: Uh-huh, yeah.

JHD: And you want to do that automatically, but the only way to do that is to have it syntactically associated with the struct, is that right?

SYG: Not necessarily. Okay, syntactically associated with the struct, meaning inside the braces here?

JHD: I mean, there are other weird things I could come up with off the top of my head where it doesn't have to be inside the braces. But for the sake of discussion here, sure: if you use a class-syntax method directly inside the braces, that would make sense to me, in the way that it is canonically and syntactically associated with the struct, it would be re-evaluated in each context, and I get it.

SYG: So the re-evaluation is nothing special. The special thing is where the re-evaluated function gets assigned to. That's what the auto-correlation does. But the re-evaluation is just a plain re-evaluation. This gets evaluated as a function in, you know, the executing realm, like every other function declaration or expression.

JHD: Right.

SYG: The point is, like, where does it go?
Now you have this thing that is not shared; if you want to put it on a shared prototype, how do you do that? So the proposal here is that, well, we say the prototype itself is actually not shared, and that's how that happens.

JHD: Right. So you just make a new prototype object to hold all the methods, in each context?

SYG: Yeah, exactly.

JHD: So that part makes sense to me. But I'm looking at this slide. Is this the proposed way you would define the `where` method?

SYG: No. This is — I think this is my bad. I had copied these slides without reviewing them in depth when I made this slide deck, and I had missed this. I think this is also the same question as MAH's question: are struct prototypes less strict than shared struct prototypes? It's not that they're less strict. Because I said that all structs have fixed layout, including transitively immutable prototype chains, this code should not work. So that —

JHD: Did I miss a slide where you showed the version of this code that should work?

SYG: No, I didn't have it. I don't actually have it. The way that this can work is that if I extend some base class B, and B, for example —

JHD: Is also —

SYG: — has the `where` field defined. As long as you define the layout ahead of time, then this would work. As it is, you are adding a new undeclared property onto the prototype, which violates the invariant I said earlier, that all structs must be fixed-layout.

JHD: Okay, would it work to, after the `y`, let's say — where the shared point is a base class — if you just write `where` with parentheses and curly braces, as if it were a class method? Is there any reason that wouldn't work?

SYG: Yes, that should also work, if it's realm-local. Yeah, that would work.

RBN: That's part of the proposal, correct.
JHD: Okay, so then the resolution of the hand-waviness is that you'll define methods in a syntactic way — not necessarily this, but that would work — and the functions themselves, as well as the prototype object that houses them, will be, you know, different in each context, which may be a realm or however you want to describe it. Is that accurate?

SYG: Yeah, it's realm. Yeah.

JHD: Okay, thank you.

MAH: Yeah, I had asked about the strictness, because some of the examples had dynamically modifying the prototypes, which seems counter to what unshared structs were. But it looks like you want to make sure that isn't the case, and find a way through that.

SYG: Yeah.

MAH: So moving on to my other question. Before we discuss whether source location is an appropriate key that isn't a communication channel, I want to understand what you mean by source location. In particular, one example is `eval`. If you eval some string that declares a shared struct, what is the source location for that? And can it ever be made to match the eval'd or non-eval'd location in another realm?

SYG: No, each invocation of `eval` would have a distinct location from any other invocation of `eval`, even if it's on the same literal.

MAH: Okay. So it sounds like source location strictly works only when the module or script source has been directly loaded by the host somehow?

SYG: Right. The common use pattern we expect is something like this: you have your shared structs in a structs.js, and then everyone imports from that.

MAH: So at first sight, it would seem that your source location is indeed an unforgeable key, and as such is a valid way to key a global registry without providing a communication channel — that is what I understand. I would double-check with Mark, which I haven't had an opportunity to do.
However — maybe for future discussion — I am wondering about this source location. I would like at some point to discuss how it's a weird departure, where now you're saying some features of the language are only available through source that was evaluated in a very specific way. It makes a difference whether the source was dynamically evaluated or was somehow virtualized. It seems to potentially fully prevent some types of virtualization of source loading.

SYG: I think that is a good characterization, and if I understand correctly, the virtualization goal would be in tension with the communication-channel goal. This basically comes down to: can the language not expose something programmatically, so that it's not a communication channel? But if it does not expose something programmatically, then you can't virtualize it. That seems to be in direct tension to me.

MAH: Yeah. And the thing is, there are some intents to use virtualization-type mechanisms for bundlers, for hosts — I mean, in general, virtualization as in a host virtualizing another host. So that would mean, all of a sudden, that for example you might not be able to simulate a browser in a Node environment anymore, if you did something like this.

SYG: But it's different for Node, because they can do whatever they want at the C++ level. I mean, an engine can provide a more expressive API to its embedders that pierces the language-guarantees thing, so long as the net result is compliant. This key could be exposed via C++, and we can say Node makes a promise that it doesn't expose that to user code, but it can use it internally. I think that's a different concern than: can you ship a piece of JavaScript code that perfectly virtualizes a host?

MAH: Yeah.
I mean, in effect you're saying that you can only simulate, virtualize another host if you have native hooks in your own engine?

SYG: Right. The complications arise when we make host hooks — sorry, I don't want to go on too much of a tangent on the virtualization of host hooks, but basically, right, we treat host hooks specially; they're distinguished things. If we expose them to user code, that's a whole different can of worms, and we would need to think that through very carefully.

MAH: Yeah. To recap, I think if you use source location, then for sources that are evaluated or resolved directly by the host, it would seem to be an unforgeable key and thus prevent a communication channel. However, that seems to be in conflict with JavaScript virtualization of host behavior regarding evaluation of source.

SYG: Yes. And I would like more collaboration here to resolve the tension, because I think you hold both goals, while I don't really hold either goal.

MAH: Yes, I mean —

SYG: You, as the folks who care about communication channels.

MAH: Yes. I cannot speak much more deeply toward the virtualization goals. Thanks.

SYG: Cool.

KM: I guess I'm more confused by the virtualization goal. Maybe there's something I'm missing, but wouldn't the same problem apply with symbols? If you evaluated the symbol in multiple contexts, you wouldn't get the same symbol again. Or maybe I'm just misunderstanding something.

MAH: I think I do not understand the question. A symbol is effectively the same as an opaque object that you just create. It's a unique identity that you create.

KM: Right.
But, like, in this context, you eval a symbol in two different contexts, right? I guess your point is that each time you evaluate this source text, you're getting a different source location, so —

MAH: Right. You're getting a different —

KM: You're evaluating the symbol twice. It's not semantically the same to evaluate it twice?

MAH: There's a difference here: the result of the evaluation is effectively creating a new instance — it's creating a new type definition, and that's fine. But if the only mechanism to attach behavior to a, air quote, "known type" is a source location, that means there is no eval- or JavaScript-controllable way of attaching behavior to a known type.

KM: But wouldn't you just do something like: when you're trying to virtualize, before you execute their code, you run it once in an eval, and then you forward that on to the user, the virtualized code? Not saying it's nice; I'm just saying it seems possible, right?

MAH: You can't, because as SYG pointed out, evals in two different realms or workers would never yield the same source location, even if they're done in the same lifetime of the program.

KM: Right. This is probably different from symbols, because symbols you can't postMessage.

MAH: Even if you could post-message them, it would depend on the semantics of the postMessage you propose anyway.

SYG: In the interest of time, I would like to get to WH's question.
IID: Just a quick clarification: if for some reason we do this eval thing and we don't opt in to correlate, are there any user-visible distinctions, or is it purely a performance issue, because the shapes don't necessarily align and you get megamorphism and so on?

SYG: There is that, but there is also a user-visible distinction, in that if you have a realm-local prototype and you do the eval thing, you will have two different keys. You will need to manually set up those realms if you expect them to have the same methods on the prototype; you will have uncorrelated prototype objects.

IID: If you send it to another realm, then it won't have any methods. Okay, understood.

SYG: Yeah. WH?

WH: I'm curious what the consequences of unordered accesses to these fields are. When this arises in shared arrays, you might just get some bad bytes, but here we have typed values, which presumably can contain numbers, strings, and whatnot. Can you get a bad string?

SYG: The weakest guarantee is that there is no tearing. These things are basically all pointers under the hood, modulo stuff that may be NaN-boxed. Because the minimum requirement is no tearing, even though accesses are unordered, you cannot get a bad string. You can perhaps get a pointer to a different string, due to a race, but you will get some string if a string was stored — you will get a pointer to some string. You're not going to get a half-written pointer, for instance, so you're not going to get a thing that will crash on dereference. Basically, this requires the implementation to have all these field accesses be pointer-aligned.

WH: Okay. Imagine an implementation which modifies strings in place as an optimization. Could that implementation no longer do that optimization if anybody sticks strings into shared structs?

SYG: That is correct.
This was one of the most difficult things to prototype in V8: the string subsystem. Strings in every JS VM are extremely optimized, including representationally optimized, with things like in-place mutation, even though they are immutable at the language level. Optimizations that do in-place mutation of strings are obviously not thread-safe, and we had to use different strategies to recover some of those optimizations in a thread-safe environment. We have done that, but the naive thing where you overwrite the current string is not safe.

WH: Yeah. I'd be curious to hear more about experiences with that at some point.

SYG: Sure.

USA: Before we move on with the queue, there are three items in the queue and two minutes left on this item. But, yeah, moving on: next there is MM.

MM: Since time is limited, I'm going to focus on one issue, which is that there's an inconsistency between two goals here that I think can be resolved by throwing out prototype methods. You acknowledged that this whole thing — shared memory multithreading, especially cooperative shared memory multithreading, where you depend on separate locking operations that may or may not be used correctly — is all a huge footgun, and the answer to the huge footgun is that we're trying not to provide it by default, but on the web to have some kind of opt-in, and outside the web, either an opt-in or an opt-out. And I appreciate all of that. Because this thing, if it were widely used, would be a disaster for the ecosystem, there's a desire to keep it narrow, as sort of an expert-only feature, for experts who use parallelism to provide thread-safe abstractions to others that are not subject to the dangers of parallelism. The usability goals that were stated as motivating the prototype methods just conflict with that.
If anybody takes code that was written not for shared memory multithreading and thinks that they can correctly port it to shared memory multithreading by just changing a few things, they're wrong, and we should not encourage that illusion. And if we completely omit the whole notion of shared prototype methods, then this proposal really narrows down to essentially exposing just shared structs as structured, heap-allocated typed data — the thing that's motivated by what's forcing our hand, which is WasmGC.

SYG: MM, I recognize where you're coming from. I think this is a difference in line-drawing, on where we draw the line of how much we're encouraging. The ergonomic feedback is what prompted us to add methods — the initial proposal did not include methods, for these reasons, and because they were hard to do. But this was added after feedback from power users, mind you, from folks like Ron Buckton, who is a compiler author — he works on the TypeScript compiler. These are power users who found it difficult to adopt otherwise. I weigh that more, given that if the power users say they can't adopt this, that is pretty strong feedback.

MM: I accept that that is feedback that needs to be engaged with. I would like to re-examine that and enter into those conversations, because I certainly have the opposite — okay.

USA: Great. So there's two more items in the queue, but we're on time. SYG, what do you think we should do?

SYG: If folks don't mind, because this was, like, 50 minutes instead of 60 minutes, if folks don't mind a five-minute extension for the two items, I would appreciate that.

MM: I'm fine with that.

USA: Okay. All right, then. Let's do a five-minute extension. Dan, you're next.

DE: Yeah. I'm very happy with the state of this proposal. I remember arguing a few years ago with SYG about how we have to have methods, and we should do it in some way that has to do with modules.
My proposal was too complicated and hard to use, but I'm really happy with where this proposal is landing, both in terms of ergonomics like this, and in maintaining correspondence with WasmGC, maintaining capabilities, and making sure we have a common memory model. One thing we might consider, possibly in a follow-on proposal, is adding a concurrent map. RBN has been saying records and tuples should be changed to provide the basis for this; I don't actually understand how that could work, and maybe it will make more sense as a built-in construct. Yeah. Keep up the good work. Looking forward to Stage 2 in the near future.

SYG: Thanks. And concurrent collections are on my mind as future work, for sure.

USA: Next up is KM.

KM: All right. So, is it just an ergonomics issue where you don't want to do a handshake on the main thread -- you initialize all these shared structs, post-message the constructor prototype (or an empty one) to all your workers, and then get the constructor and prototype on each of those workers and initialize it that way?

SYG: Exactly. That was the initial thing we led with, and tried, and Ron actually has prototyping experience here: that proved to be a giant ergonomic hazard. The thing that convinced me this was worth it is that if you don't have this auto-correlation thing and you require a handshake, you add another serialization point on every thread spin-up. You have to do this handshake where you basically barrier all the threads. You receive -- in the beginning we called them exemplar constructors -- you receive all your exemplars and set them up. After all the threads go into the barrier, you're like, okay, initialization phase done.
That means you can't just spin up threads as you want; they have to do a more fine-grained handshake. The handshake does this on spin-up, which could hurt loading performance, and that's what convinced me in the end.

RBN: If I can also add to that: there's also an issue, when it comes to a handshake process, of being able to distribute it across multiple threads, which makes it a much more complicated problem. It was a problem we were considering, and it means there's an opportunity for users, for poorly written code, or for code that is intending to do something improper with the runtime to try to sneak in ahead of certain things -- depending on how careful a person has been about constructing their setup for shared structs to do this type of handshake process -- which means they could introduce a mapping which is not the one you would want, and potentially wreak havoc with the system. The approach we took was basically an evolution of: should we do this handshake process, how can we simplify the handshake process to make it more ergonomic, and can we remove the process entirely? That takes away the complexity where someone could mistakenly do the wrong thing, and makes it a built-in capability.

KM: Okay. Makes sense.

USA: That was the entire queue.

SYG: Great. Thank you very much. I think this was a helpful discussion. Yeah, thanks a lot.

USA: Yeah. Thank you, everyone. And thanks, SYG. Let's see each other at the top of the hour. Have a great lunch.
### Speaker's Summary of Key Points

- This is the JS side of shared memory in WasmGC
- Mark Miller reluctantly recognizes that shared memory is coming via WasmGC; wants an opt-in/opt-out mechanism
- Using source locations for the shared struct auto-correlation mechanism isn't a communication channel, but is also hard to virtualize
- Owing to power-user feedback, methods were deemed necessary for adoption of the MVP

### Conclusion

- Champions will engage Mark Miller further to recap how they arrived at adding methods
- Feature set enumerated in the slides
- Stage 2 planned next meeting

## Strict Enforcement of 'using'

Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/rbuckton/proposal-using-enforcement)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqpmyWWWHf6TPYt2gg?e=fPIwaL&nav=eyJzSWQiOjI1NiwiY0lkIjoyMTg5ODA3Nzk5fQ)

RBN: Okay, so in a previous plenary session there was a suggestion about `using` declarations: there is no strict enforcement of `using`, so in some cases you might have something that produces a disposable but you can't guarantee that it's being initialized via a `using` declaration, `DisposableStack`, or `AsyncDisposableStack`. So I was asked to look into this a little bit more, and I want to present some findings and discuss whether this should be a feature that we want -- whether this should be its own proposal.

RBN: So, just to again stress the motivations here: right now, `using` declarations do not enforce disposal. There's no way to say that the disposable I returned from a resource must actually be taken and used.
However, any type of [ ] has the chance to drastically affect adoption, especially since there are APIs in Node.js or the web platform that could potentially be upgraded to support disposables or `Symbol.dispose`, but do not need that level of complexity -- or, to have that level of complexity, would require bifurcating those APIs into separate APIs that handle those cases. Node.js would have to offer another way of getting file handles than the `fs/promises` mechanism callers currently use, because you don't want to break existing callers. So, one option is an opt-in strictness mechanism at the API producer level, to limit adoption blockers for existing APIs that could not adopt this without bifurcation, while also giving resource consumers the ability to require enforcement. The solution we're discussing is that an API could return an object that, instead of being the resource or having a `Symbol.dispose` method, instead has a `Symbol.enter` method that is invoked by a `using` declaration, an `await using` declaration, a `DisposableStack`, or an `AsyncDisposableStack` to unwrap and get the actual resource. The reason we consider this to be an opt-in API is that a non-strict API would otherwise have to add a `Symbol.enter` method just to support this. We don't believe this is a requirement that needs to be passed down to everyone, since that is what you would have to do to not be strict, so we're proposing this as an optional behavior that `using` and `DisposableStack` would implicitly support. It's not intended to be a separate mechanism for initial acquisition; it's primarily designed to be an indicator that I want this resource used via `using`. If we created the resource and the application tries to access a property or method on the result that is consistent with the expected API of the resource, it will throw, because the caller has to go through an extra step to get to the resource -- if they want or need to explicitly access the APIs directly, as building blocks for disposable resources. It's not intended to delay resource acquisition, just to enforce resource usage.

RBN: "Why opt-in?" There are numerous host APIs in the DOM and other platforms that can readily be made into disposables. Node.js has `Symbol.dispose` and they've already added it to `fs/promises`; you may have seen this used with Babel or TypeScript. If this type of enforcement were a mandatory part of the actual disposable protocol, that would be problematic, because it adds extra overhead and additional documentation complexity. And it doesn't help existing callers that want to upgrade, because you can't do a simple refactoring to change from a `try`/`finally` to a `using` declaration -- you have to know which correlated API it has been changed to -- so it's not helpful for adopters, and it slows adoption. Some of the reason there's been interest in having this strict mechanism is native resources that people can potentially drop on the ground instead of using `using`.

RBN: And they already use some type of GC finalizers on the API side to clean up native `fs/promises` file handles.
If you drop one on the floor, it will essentially be cleaned up by GC and the handle will be released. Most user-defined disposables will also be holding on to memory-managed objects, which are not going to require strict enforcement to deal with releasing native resources.

RBN: If we do decide to adopt this mechanism, the actual spec text is fairly small. In addition to the `Symbol.enter` method, we would have this additional change to CreateDisposableResource that says: get an `enter` method; if it exists and is callable, call it to get the actual resource, and if that resource isn't an object, throw -- again, just to repeat the same check that we did in step 1.b.1.

RBN: Now, I'm somewhat concerned about adding this proposal. I don't necessarily think it's a requirement or necessary, as there are potential workarounds that can be used today which are somewhat consistent with the other ways software handles this. One is `FinalizationRegistry`; the other is to make `Symbol.dispose` into a getter. For the `FinalizationRegistry` case, you could have a finalization registration that sends a message to the console if the resource was dropped on the ground and not used with a `DisposableStack` or `using` declaration: register the finalization callback for the resource when it's allocated, and unregister it from the registry when it's disposed. You're notified after the fact, but it's something you would catch during testing or development. The other way we could work around this without a `Symbol.enter` method is to make `Symbol.dispose` a getter. Many resources will likely follow a pattern where accessing the resource when it's disposed should throw an exception.
You would normally throw if the resource is disposed, so what we could do is track that: when the resource is produced, it's in an unregistered state, because nobody has tried to access the `Symbol.dispose` getter. Accessing the `Symbol.dispose` getter is an explicit action -- either you're calling it yourself, or you're passing the resource to a `using` declaration or `DisposableStack`, which will get the dispose method as soon as it receives the resource -- and the getter can then fall back to whatever your normal disposal behavior would be. And you could throw if it's never registered, so you can catch those errors early. And again, one thing that we're explicitly not proposing is context managers. Python context managers have a mechanism that allows you to do additional, even asynchronous, work as you acquire the resource, and the other thing they have that is very complicated is their exit mechanism, which is similar to dispose, but also allows you to do things like intercept and swallow exceptions from code you might not expect -- it might be completely unrelated to the code that is using the resource -- and we find that to be spooky action at a distance that we don't think is a good thing to leverage. So we're not trying to propose that behavior for JavaScript; we're looking at `Symbol.enter`, or whatever we decide to name it, as purely a mechanism of enforcement.

RBN: And finally, how this might affect the resource management proposal. I think this should be optional on behalf of the API producer, and because this is opt-in and `Symbol.enter` is virtualized -- if it doesn't exist, we act as if it did -- we believe that it's not necessary for it to be a key part of the initial roll-out of this feature.
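The `Symbol.dispose`-as-getter workaround RBN describes might look like the following sketch. `FileHandle` and its members are hypothetical, invented for illustration; the getter access that a `using` declaration (or a manual call) performs is what flips the resource into its "registered" state, and other members throw until that happens. The first line is only a stand-in for engines that don't ship the well-known symbol yet.

```javascript
// Polyfill stand-in for engines without the explicit-resource-management symbols.
Symbol.dispose ??= Symbol("Symbol.dispose");

// Hypothetical resource class using the "Symbol.dispose as getter" pattern.
class FileHandle {
  #registered = false;
  #disposed = false;
  #fd;
  constructor(fd) { this.#fd = fd; }

  // Accessing this getter (which `using`/`DisposableStack` effectively do when
  // they record the disposer) marks the resource as registered for disposal.
  get [Symbol.dispose]() {
    this.#registered = true;
    return () => { this.#disposed = true; };
  }

  read() {
    if (!this.#registered) {
      throw new TypeError("FileHandle must be registered for disposal (e.g. declared with `using`)");
    }
    if (this.#disposed) throw new TypeError("FileHandle is disposed");
    return `read from fd ${this.#fd}`;
  }
}

// Dropping the resource on the floor is caught on first use:
const unmanaged = new FileHandle(3);
let threw = false;
try { unmanaged.read(); } catch { threw = true; }
console.log(threw); // true

// Simulating what a `using` declaration does: touch the getter, then use it.
const managed = new FileHandle(4);
managed[Symbol.dispose];
console.log(managed.read()); // "read from fd 4"
```

As KG points out in the discussion below, this only fires if the consumer eventually touches the instance; it is a best-effort check, not a guarantee.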
The path of least resistance is to implement disposables so callers can migrate to `using` and get those semantics, and we don't necessarily believe that opt-in strict enforcement is a blocker for explicit resource management to eventually reach stage 4. I would prefer to pursue this as an add-on proposal if we decide to go forward with it.

RBN: I still don't believe it's strictly necessary, because we have workarounds to catch misuse of disposable resources. So I don't strongly believe this should hold back resource management, and if we do pursue it, maybe we consider it as an add-on proposal specific to that proposal. I'm not sure we even need this, so I'd like to go through the queue and see what folks' responses are.

MM: So, you made a point about how this mechanism is not intended to express delayed acquisition, and the way you put that seems to imply that it's a virtue that it is not -- that one should not use it to express delayed acquisition. Obviously one can. Why should one not do so?

RBN: There are expectations around delayed acquisition that this cannot maintain. Resource acquisition is initialization: acquiring the resource is the initialization, and the thing that you're acquiring is the resource. All of the things needed to access it should already have happened, and if I'm awaiting the operation, initialization requires the `await` keyword. If `Symbol.enter` allowed delayed initialization, there might be the expectation that I can make it async for an async resource, and I do not believe that's a direction we should pursue, as it breaks the expectations of the RAII mechanism. I also believe that the purpose of this is purely to act as an enforcement mechanism.
We could choose another enforcement mechanism if necessary, but there's no runtime mechanism I could think of that would not be arbitrarily complex and would still work with `DisposableStack.prototype.use` or custom building blocks you might want to write. The only mechanism that seemed to fit the requirements was something you interact with on the object itself; therefore having something that says "I'm opting into getting this value" is kind of the best approach, and I don't think it's a good idea to extend that to do some type of additional work. It could even be that `Symbol.enter`, or whatever we name it, is actually a data property on the object, so that you're not expecting it to run any code at all. I don't have a strong position on whether we implement it that way, but the idea here is that we don't want to delay acquisition, because we don't want to create the wrong idea about what's happening.

MM: So, zooming out, my overall reaction is that I'm very, very positive on the problem you're solving. I was initially very positive even on the mechanism you're proposing to solve it, until you got to the slides on the getter, and I think that's great -- sufficiently great that I think it's a fine way to get the functionality this is proposing within the existing proposal, to the point that I no longer see the need for a new mechanism.

RBN: That is generally my position as well.

MM: Okay. So this means that the existing proposal, because you have the option to do the getter, solves the problem that you're stating here.

RBN: That is correct. My main reason for bringing this as a proposal was to present the alternatives that have been proposed on the issue tracker. They sufficiently cover the use cases, but it's important to discuss the alternatives and whether it's necessary to make a change here.
So, even while presenting this as a proposal, my initial reaction is that we don't need it.

MM: Okay, great. Thank you.

KG: I disagree. I don't think this is much of a solution at all. In particular, even if the author of the class is maximally disciplined, this only saves you if the consumer of the instance is in fact using the class instance at some point in the future. So not only do you have to change every field on the class to a getter, so you can see whether a field is accessed without the instance being registered for disposal, but you don't actually get enforcement. You do sometimes -- perhaps even most of the time -- but if you just happen to make one of these things and don't end up `using` it, you will not get an error. You will just not get disposal. And even in the happy path, where you do get the error, it's in the distant future instead of at the place you actually made the mistake, which is when you created the instance. If the error happened earlier in the lifecycle, at acquisition, something like this might be a reasonable solution, but as it is, it is not.

RBN: To one point that you mentioned, having to make all fields getters: if the field getters are only valid when the resource is not yet disposed, then what do you do with those fields after you dispose? Normally you set them to some invalid value. You could start them with invalid values, with a private field holding on to what you need, and then set the value in the getter. Usually with a disposable resource you don't have that many fields you're concerned about; it's generally the operations you perform on the resource.

KG: The point of turning fields into getters is to ensure that the user is using the class as a disposable, i.e. `using` it. So, you really do need to make all of the fields into getters to enforce that.
RBN: So, if a field having an invalid value after disposal is considered an invalid state, then a field having an invalid value before you get the dispose method is just as invalid.

KG: The problem isn't the field having an invalid value; the problem is the user using the class instance despite not `using` it for disposal. The point of the getter pattern is to enforce that it's used as a disposable, and the way that you're enforcing that is by making operations on that instance throw, so that you can do a sort of second-best check that the user is using this in the correct way. But if you're trying to enforce that property, you need to enforce it for all of the ways you use that instance, which includes field access.

RBN: Again, I would consider that -- if not having registered or accessed `Symbol.dispose` is considered an invalid state, then you would want those to throw anyway.

KG: They're just unrelated.

RBN: I don't think that they are. You're talking about accessing a resource in an invalid state.

KG: I agree that accessing the resource in an invalid state is a thing that you generally want to discourage, but it's not generally worth making every field into a getter -- that's a fairly expensive thing to do. However, preventing the class from being used without registering it for disposal is a much more important problem. You're right that you don't necessarily need to make them all into getters, but then this isn't solving your problem, because now people can use your class instance without registering it for disposal, which defeats your point.

CDA: Just want to note that we have a few more items in the queue.
RBN: I understand your concern. This may not be the most comprehensive solution, but I do believe that it covers the majority of cases. It may be poor for performance to require all of these things to become getters, but it's an opt-in mechanism that you're choosing to employ. Perhaps `Symbol.enter` is for that case, but this covers the majority of cases that are going to be concerned with this.

KG: I do want to emphasize that this doesn't give you enforcement at all unless the user happens to use the class instance.

SYG: I think I just don't understand how `Symbol.enter` enforces. First, let me clarify what it is supposed to be enforcing. I thought it was supposed to be enforcing, at the use site itself -- sorry, if you call a thing that returns a disposable, it should warn you if it's not in a `using` or passed to a `DisposableStack` or something? How does `Symbol.enter` enforce that? Can you walk me through, front to back, mechanically how that would work?

RBN: As I said earlier, the way the mechanism would be employed is that an API that requires or prefers strict enforcement would return an object that contains only a `Symbol.enter` method returning the actual resource. If you receive that with a plain `const`, you have to call the symbol-named method yourself, which is not really a thing people like doing -- it's much harder to write. It's an explicit indication that you're doing the wrong thing.

SYG: The analogy here is that in C++ there's an annotation called -- that's the use case we're solving for here, but that's a compile-time, parse-time annotation.

RBN: That requires a compiler.

SYG: Exactly. That can solve the use case, and I understand the use case. With the `Symbol.enter` thing, you can't enforce that to locally warn the user they did something wrong at the point of the mistake.
It's still whenever they touch that binding for the first time and realize "this is not what I was expecting" -- they get an error which could be arbitrarily far away from the mistake, correct?

RBN: What this supports is something similar, in a way that doesn't require a type system, in that it makes the mechanism of actually doing anything useful with the resource more complicated. If you don't have a design-time type system and you're just calling the API, you get the object, you think you can do these certain things with it and dispose of it at the end, and that doesn't work -- those become early indicators that you're doing something wrong. If you do want to use the thing directly, you explicitly call `Symbol.enter`: it's a manual opt-out, the "// disable" comment of a strict enforcement mechanism. And that's for the case where I might want to build a disposable building block, or I might want to directly interact with the API myself -- that might be a part of the system where you have to check in the method and can throw exceptions, and there's no way around it in those cases. So this is kind of the best-case mechanism for doing this in the JavaScript world. It achieves the minimal goal of saying: hey, I really want you to use a `using` statement or a `DisposableStack` to actually work with this resource, by making it much easier to use a `using` statement or `DisposableStack`, because those will automatically unwrap it for you. Just like people mostly use `for`-`of` rather than calling `Symbol.iterator` and then `next` -- people will do that, and we give them escape hatches to do it, but you don't call `next` over an array by hand; you call into those APIs to work with it. I originally changed these slides so it's not "enforce", but "stricter enforcement".
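Mechanically, the protocol RBN walks through above might look like the following sketch. No shipping engine has `using` or `Symbol.enter` yet, so `use()` below simulates what a `using` declaration would do under this proposal, and `openFile` and its shape are invented for illustration.

```javascript
// Polyfill stand-in for engines without the explicit-resource-management symbols.
Symbol.dispose ??= Symbol("Symbol.dispose");
// Local stand-in for the proposed well-known symbol.
const enter = Symbol("Symbol.enter");

// A "strict" API returns a wrapper whose only useful member is [Symbol.enter].
// Touching anything else (e.g. wrapper.read) fails fast, nudging callers
// toward `using`. All names here are hypothetical.
function openFile(path) {
  return {
    [enter]() {
      return {
        read: () => `contents of ${path}`,
        [Symbol.dispose]() { /* release the native handle here */ },
      };
    },
  };
}

// What `using handle = openFile(...)` would do: call [Symbol.enter] if present
// and callable, then track the unwrapped resource's disposer.
function use(wrapper, body) {
  const resource =
    typeof wrapper?.[enter] === "function" ? wrapper[enter]() : wrapper;
  try {
    return body(resource);
  } finally {
    resource[Symbol.dispose]?.();
  }
}

const plain = openFile("/tmp/a.txt");
console.log(typeof plain.read); // "undefined" — plain `const` use breaks immediately

console.log(use(openFile("/tmp/a.txt"), (f) => f.read())); // "contents of /tmp/a.txt"
```

The `use()` helper also shows why the enforcement is opt-in: a non-strict resource without `[Symbol.enter]` flows through unchanged.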
We're trying to guide people into using disposables, and have an out for people that need to build on top of those APIs or call them directly.

DRR: This was a clarification to say: it's a runtime enforcement. You must go through the method to get the thing that has all of the functionality, and most people will want to use `using` to do so. Or you can go through the method manually, like RBN said.

GB: I'll keep it brief, but just to say this is solving a very real problem for resource acquisition, where you want to ensure that resources are explicitly dropped. If you want to make that guarantee, then we need this proposal -- under the guarantee that `Symbol.enter` is being called synchronously, immediately after the resource has been acquired, to give you a valid resource that we then know is going to be explicitly dropped. The getter on `Symbol.dispose` feels as though it's not as concrete; for example, if other methods want to check whether the dispose method exists, they could inadvertently trigger it. This strong guarantee at the protocol level feels like it should have been in the original proposal, and if it's still able to be upstreamed, we should do that -- I would support that, or going to stage 2.

RBN: The majority of APIs that will start using this when resource management gets to stage 4 won't be using this functionality. All of the existing APIs would not be able to do this without breaking code or bifurcating, which I don't think is a valid path forward, and this comes along later as a mechanism for people to have that stricter level of enforcement, if they want it. So I don't want to hold back resource management, because there's a long tail of building APIs and getting them into user hands that we should not gate on waiting for this to advance. Now, if this reaches stage 3, we could implement it.

CDA: We are at time. Apologies to the other folks in the queue. We just don't have enough time.
RBN, did you want to ask for stage advancement?

RBN: Yeah, let me get to that. First, the initial question I had was whether the existing workarounds would be considered a viable alternative to this. It sounds like there are some concerns from KG that that is the case, so I'm not certain that's an option going forward. I'm not sure if KG can respond: is his concern one that requires this proposal to exist?

KG: I can't require a proposal to exist, but I can say that, despite the alternatives you presented, I'm still very interested in exploring something better in this space.

RBN: Should I say, rather than requiring -- would you like to block stage 3 or 4, or request it be demoted to stage --

KG: No -- resource management is already at stage 3.

CDA: We need to move on.

RBN: All right. So the last thing I would ask -- whether or not this is necessary, and maybe we can talk about whether it can be abandoned after the fact if it's not -- is to advance this to stage 1 or stage 2, considering I have full spec text as well.

MM: Stage 1 is a problem statement, and further examination of the problem is fine, and I'm fine with the getter, so I would definitely object to stage 2.

RBN: Fine with me.

CDA: Support there from Mark for stage 1. In the notes, JRL supports stage 1. Keith Miller: stage 1 is okay, not stage 2.

RBN: That works for me.

CDA: It sounds like you have stage 1.

RBN: Is there anyone opposed to advancement?

WH: I also support stage 1.

CDA: Okay. Not hearing or seeing any objections to stage 1. RBN, you have stage 1.

### Speaker's Summary of Key Points

- Introduces an opt-in stricter enforcement of `using` and `DisposableStack.prototype.use`.
- When present, `using` (and `use`) invokes a `[Symbol.enter]()` method whose result becomes the actual resource, to guide users to `using` and avoid the "stumbling block" of manually calling `[Symbol.enter]()`.
- Opt-in, as mandatory enforcement would complicate adoption in the DOM/Node.js/Electron/etc.
- Enforcement alternatives exist, incl. `FinalizationRegistry` and turning `[Symbol.dispose]()` into a getter.
  - Some considered the alternatives not strong enough.
- Some committee members are concerned the suggested "stumbling block" approach does not provide enough guarantees.
- If adopted, this would be pursued as a follow-on proposal to Explicit Resource Management due to its opt-in nature.

### Conclusion

- Adopted for Stage 1.

## Stop Coercing Things pt 4

Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1aumShXqYgQV38Bg_L3FfJvGKIupJVCxs_C-Iz3r_tRE/edit)

KG: Alright. I am coming back with part 4 of this proposal -- or, I shouldn't say proposal, because it's not a proposal in the traditional sense that we mean in the committee, but a suggestion for a guideline for future work. In case you haven't been paying attention to the previous parts, a brief recap. My thesis is that passing things of the wrong type is almost always a bug, and bugs shouldn't be allowed. That's all I'm trying to get us to codify, along with the details of what that means for our various APIs. The demonstrative example I have here is `['a', 'b', 'c'].at(true)`. As I'm sure at least some people on the committee know, this will get you `'b'`. That's not a thing people should have to know. If you accidentally end up with `true` instead of a number in the variable you're passing as the argument to `at`, you're going to be very confused when you get `'b'` out of this -- especially if you're expecting only 0 or -1, let's say. And I don't think we have to keep doing this. Precedent is good, but as they say, it's not a suicide pact, so for sufficiently bad things it's worth looking at breaking with precedent.

KG: So, we've talked about a few things already and gotten consensus for the things on this slide.
`NaN` is not treated as zero -- treating it as zero I think is particularly deranged. Not treating undefined as anything else. Not rounding numbers. Not coercing objects to primitives, other than to booleans. And concretely, when I say "stop doing these things", that means in future APIs, when you receive one of these things and you're expecting the other -- let's say you're expecting an integral number -- you throw a TypeError if you got a non-integral number. And I have a draft document to make this concrete.

KG: As a reminder, these are not intended to be absolute rules. If you have a case where you think it makes sense to do something different for a proposal, it's fine to make the case for that. I'm just trying to set the defaults for proposals going forward.

KG: So what's left to talk about? I think mostly just these two things. We started discussion of the first one [on the slides] during the last talk, but we went back and forth on exactly what it should look like to not coerce between primitives. I originally presented complex rules, and in committee there was support for being more restrictive than I thought there would be -- support for not coercing between primitive types at all, except to boolean, which is a simple case and never invokes user code. The second thing is: don't coerce primitives to objects, or at least not for options bags. That's not so much a change from what we're already doing, but I do want to make sure that we codify it. We'll start with the first point, coercion between primitives. I'll have some examples of cases that I think are silly, but because the full matrix of each primitive type to each other primitive type is quite large, I'm not going to have an example of every case, just cases that I think are representative. For example, parsing null: `parseInt` gives you pretty silly results -- you parse the string "null". Or you can index an array with `.at(null)` and get the first element of the array.
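The coercions KG lists are observable in any current engine; under the proposed guideline, future APIs would throw a `TypeError` in these situations instead.

```javascript
// Boolean and null silently coerce to array indices:
console.log(['a', 'b', 'c'].at(true)); // "b" — true becomes index 1
console.log(['a', 'b', 'c'].at(null)); // "a" — null becomes index 0

// parseInt stringifies its argument, so null becomes the string "null":
console.log(parseInt(null));     // NaN — "null" is not a base-10 numeral
console.log(parseInt(null, 36)); // 1112745 — but "null" is a valid base-36 numeral
```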
KG: Similarly with booleans: `parseInt(false, 36)` parses the string "false". This is not reasonable behavior. I'm not going to go through the other examples; I think you get the idea. But there are some cases that are more sensible: there are numeric strings that you might consider reasonable to pass to a number-taking API. I'm suggesting that even in the sensible cases we reject these, so the rule is simpler. It's not hard for the caller to do the conversion themselves if they want it.

KG: I'm going to go through the second suggestion before getting to the queue, but that's the end of the first suggestion. The second suggestion, not coercing primitives to objects, I hope will be simpler, especially because there's already some precedent for doing the thing that I want us to do, which is saying: if you have a position which takes an object, especially an options bag, primitives should be rejected. You shouldn't look up the "alphabet" property on a number, let's say, and potentially get that property from Number.prototype. ECMA-402 codified this rule a while ago; previously there were various APIs that didn't do this enforcement for options bags, and 402 switched to doing this enforcement in new APIs, and I think that's the right call. I want to codify it for 262 as well.

KG: For non-options-bag positions that take an object -- for example, taking a `Set` as the argument of `Set.prototype.union` -- I think rejecting things which are not already objects is also correct. There's one last question about what to do with iterable-taking positions, which I don't want to get into unless we have more time to kill. Yeah, what do people think? There's nothing in the queue.

CDA: SYG has entered the queue.

SYG: Sounds good.

MM: Agree.

PFC: Also +1. Especially the no-primitive-to-object portion. We did this in Temporal.
I cannot think of a possible use case where you would want to put options properties on Number.prototype or String.prototype and get them off there by passing a number or string as an options bag.

CDA: There's a +1 from DE. You have a +1 from DMM. "Looks good to me" from MF. You have a +1 from JHX and a +1 from Tom ??.

KG: All right, well, I was definitely expecting more discussion here. I'm glad we have been so efficient. I suppose that leaves time for my bonus slide, which is: should iterable-taking positions accept strings? We are currently not 100% consistent about this. For `flatMap` in the iterator helpers proposal, we made the decision to say that the mapper has to return an iterator or an iterable, and strings don't count. The behavior of wanting the characters of a string spread is so unusual that we decided that's something users should have to opt into explicitly, by invoking the string iterator, if they really want it. And for `Iterator.from`, the static method, we decided it should take strings because it's a coercion method and you can for-of over a string. So the question that I have for the committee is: how do we feel about the thing on the slide? Do we want to take strings in iterable-taking positions going forward? Strings are the only primitive which is iterable right now. We could instead say that iterable positions accept only objects and be done with it.

CDA: Jordan?

JHD: Yeah, I'm in agreement. I think iterable strings were a huge mistake. If passing a string to `Iterator.from` is annoying, then we should do what should have been done in the first place and add a `codePoints` method to strings. I'm in very loud support of this.

CDA: Nothing else in the queue. WH?

WH: I'm fine with the iterable direction.
I have a different concern, which is that I'm starting to see resistance to *explicit* coercions, where the user explicitly wants a coercion and we don't want to provide it. So I want to make sure there's nothing in this discussion that prevents us from defining explicit coercions.

KG: I'm not sure exactly what resistance you have been seeing, but certainly I think that explicit coercion is fine, and having methods like `Iterator.from` and getting an iterator out of it is fine.

WH: An example from earlier today: there was some opposition to providing explicit coercion between Numbers and Decimals.

KG: Ah. Yes. Well, I don't have an opinion on that one. But I think that we should certainly not have a policy of opposing explicit coercion in general. I think that if in a specific case someone has a reason to not want explicit coercion in that specific case, it is reasonable for them to make the case for that, but the default should certainly be to provide explicit coercions when sensible.

WH: Okay, thank you.

JHD: I was going to echo everything KG just said. The best way to limit implicit coercions is to make explicit ones available, and banning explicit coercion is counterproductive to banning implicit coercion.

MF: So, I didn't have time to review this change beforehand. I think I agree with it. I would like to be able to survey existing uses of iterator APIs, most of which I've probably added, just to make sure that all of those places really wouldn't want strings passed. Additionally, somewhat on JHD's point, it might be easier to agree to this design principle if we had a `codePoints` API on String.prototype, so that we could fully pretend strings are not iterable.
Right now, if we agree to this and you wanted to iterate a string, you would have to pass it to `Iterator.from` or spread it or something, and you're admitting at that point that strings are iterable, whereas with a `codePoints` method you can fully pretend the string iterator doesn't exist, and I think that might make this fit better. But I think I would be okay with this even without that; I just think it gives more motivation to pursue it. I just want a little bit more time to review the APIs first.

KG: Yeah, that sounds good. I would be in support of such a method either way. And perhaps we can revisit this precise question the next time there is an iterable-taking API up for discussion.

CDA: JHD.

JHD: Oh, I'd be happy to write a quick proposal for that if this is a blocker, but it's not a blocker for this. We can talk about that later.

PFC: I don't think I have anything to add to what Michael said. I think he said what I was going to say.

DMM: Well, I don't have a lot to add. Something like `codePoints` on String would be excellent. We might also want to consider that code points may not be enough; you might want graphemes, or whatever the term is for the actual visible components of the string. That seems to be a common thing that people actually need and would be worth considering.

KG: Okay, so I am going to ask for explicit consensus on a guideline, a nonbinding guideline, for just these two things on the slides, with the caveat on the first one that in future APIs (except for close cousins of existing APIs) we should throw TypeErrors if given a primitive of a type that is different from what we are expecting, whether we're expecting another primitive or an object. And I'm not at this time asking to codify the iterable change, but we have some support for it, and we'll probably revisit the topic next time there's an iterable-taking API up for discussion.
So, just these two changes. We heard lots of support, so I'm not going to ask you to voice support again; I'm giving people a chance to object if they don't like this direction.

CDA: We have a +1 from MM. Nothing else in the queue.

KG: All right, hearing no objections, I will take that as consensus, and I will get the how-we-work document updated.

### Speaker's Summary of Key Points

- KG proposed stopping coercion (for new APIs, as a nonbinding guideline) from:
  - null -> anything
  - boolean -> string
  - boolean -> number
  - primitive -> object in the case of options bags [already the case in 402]
- There was discussion about not treating primitives as iterables, but this will be revisited in the future, and no decision was made.

### Conclusion

- Consensus on the non-binding guidelines proposed in the slides

## (continuation) joint iteration: confirm our stance on #1

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-joint-iteration)
- [issue](https://github.com/tc39/proposal-joint-iteration/issues/1)
- no slides

MF: I hope everybody's ready to jump right in. So, just a reminder: we are considering whether Array needs to be part of joint iteration. Arrays are currently supported by `Iterator.zip` through the iteration protocol, and this would be a way to shortcut the iteration protocol and zip arrays through indexing. If we decide that it is fully motivated, it doesn't make sense for it to have to go through its own separate proposal process. If we're unsure about that and we want to argue its motivation separately, we can argue it in a separate proposal. I also have the saved queue from last time, if we want to seed the discussion with that, or if you want to add yourself to the queue, go ahead.
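To make the two shapes MF describes concrete, here is a hedged sketch (illustrative only, not the proposal's spec text): zipping through the iteration protocol, which works for any iterable, versus zipping arrays directly by index, which skips iterator objects and protocol calls entirely.

```javascript
// Protocol-based zip (roughly what Iterator.zip does): works for any
// iterables, including lazy ones, at the cost of iterator machinery.
function* zipIterables(a, b) {
  const itA = a[Symbol.iterator]();
  const itB = b[Symbol.iterator]();
  while (true) {
    const ra = itA.next();
    const rb = itB.next();
    if (ra.done || rb.done) return;
    yield [ra.value, rb.value];
  }
}

// Index-based array shortcut: no iterator objects, no protocol calls.
function zipArrays(a, b) {
  const len = Math.min(a.length, b.length);
  const out = new Array(len);
  for (let i = 0; i < len; i++) out[i] = [a[i], b[i]];
  return out;
}

console.log([...zipIterables([1, 2, 3], "ab")]); // [ [ 1, 'a' ], [ 2, 'b' ] ]
console.log(zipArrays([1, 2, 3], [4, 5]));       // [ [ 1, 4 ], [ 2, 5 ] ]
```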
CDA: MM is okay either way.

JHD: Yeah, if it's not included in this one, I would love to start right out of the gate with a stage 1 proposal for the array version, but that feels like process overhead to me, since I'd just be copying and pasting everything and tweaking some spec text. But either way is fine, as long as the outcome is that both approaches end up in the language.

CDA: All right. Nothing else in the queue.

KG: What were the queue items from last time?

MF: Let me open up the saved queue. One second. Okay: we had NRO saying that the symmetry between arrays and iterators is already broken because take and drop are not technically slices. JHD said that it can go on Array, not Array.prototype; that's fine. KG said: same as NRO. SYG wanted to tease apart the performance claims, so it seems like SYG may not be convinced about motivation here. And KG said: mental model of readers versus writers.

KG: Yeah, I vaguely remember my point there. Unless SYG wants to go first.

SYG: You go first; I'm not sure I remember what I was thinking.

KG: Okay, so, yeah. JHD, I believe, was making a point about wanting this method to exist because it fits with the mental model better, rather than having to do an `Iterator.zip` followed by `toArray`, that being large conceptual overhead for doing stuff with iterators. And I am not convinced that this is necessarily a concern for people who are reading the code. I think if you are reading code that says `Iterator.zip(...).toArray()`, it's quite clear what's going on. There's not really much mental overhead to that. So I'm not convinced that the mental-overhead argument is sufficient on its own to justify adding the array methods.
JHD: Yeah, I don't think I was trying to make the argument that people wouldn't understand what's happening; it's that it's unnecessary boilerplate, and if they could just do `.zip` or `Array.zip` or something, I think that would be clearer. The other thing that I think would be a nice property, as much as possible, even if not in every case, is being able to switch code back and forth between arrays and iterators when you need the laziness. I've definitely refactored stuff to use iterator helpers because it looks cleaner than what I was doing before, but I've had to reverse some of those changes because the performance was not good, and it's nice to be able to do that smoothly and not completely rethink the way I'm writing the code. But yeah, I agree the iterator form is understandable.

KG: I do want to also emphasize that the symmetry is already a little bit broken, but I'm expecting it to be broken a little bit more, because there are a few other things - takeWhile, dropWhile, chunks, windows - that we're never going to be able to add to arrays. My hope is that we're going to be able to add those to Iterator, but that means there is going to be more and more utility that is not present on arrays. This one we can probably add to arrays, but I don't want to count on them having the same functionality going forward.

JHD: Yeah, I think it would be unreasonable for me or anyone to insist on every iterator method becoming an array method, but I think many of them will be appropriate.

SYG: I remember somewhat the queue item I had previously. I wanted to be explicit about the array methods as a general thing, not specifically for zip. It's not a goal to have array things mirror iterator things because array things are faster. The performance characteristics are just different.
I think we'd be doing a disservice with respect to performance if we had a catch-all rule that we should have a version of zip on Array because it's more performant. That's not true: it's more performant in different cases, and you're making certain trade-offs. I want to be explicit that for whatever proposal comes out of this, that's called out; it's not a general rule. The performance characteristics of iterators and arrays aren't really that comparable.

CDA: We have 3 minutes left. Reply from JHD.

JHD: Yeah, so, I mentioned performance every now and then, but it's just not really a primary motivator for me. Arrays and iterators are conceptually both lists of things, and a lot of the operations that apply to one apply to the other, for me.

SYG: I think it behooves people to think about the difference. Like, if you want to zip giant things, you probably don't want an array, right?

JHD: Yeah, there are some use cases where you might want one over the other, of course, but that doesn't change the conceptual operations you might want to perform.

SYG: Okay. I don't entirely agree, for a nuanced reason, but I don't want to take up the time here.

MF: I'd like to reclaim the last minute to do a summary. What I'm hearing is that it wouldn't be a blocker in either direction whether array zip was included in or excluded from joint iteration at an upcoming meeting. JHD, please sync up with SYG on the tracker to figure out your differences. If there are no remaining concerns about the motivation for array zip, because I haven't heard any other concerns about it, I'll go ahead and include it, and if there are still questions remaining, I'll go ahead without it. Sound fair?

JHD: Thanks. That sounds fine to me, and I'm happy to do the work.

MF: I'm happy to do the work. I'm super neutral on this topic. I just don't want it to jeopardize the iterator methods going forward. That's my concern here.

JHD: Understood.
MF: Okay, thank you.

### Conclusion

- It wouldn't be a blocker in either direction whether array zip was included in or excluded from joint iteration at an upcoming meeting.
- Discussion to continue on GitHub from https://github.com/tc39/proposal-joint-iteration/issues/1#issuecomment-2077815265, including among the opinionated parties, JHD (who has been arguing for array zip) and SYG (arguing against). MF is OK with either outcome.

## (continuation) Make eval-introduced global vars redeclarable

Presenter: Shu-yu Guo (SYG)

SYG: Oh yes, I would like to do this real quick. It shouldn't take more than a minute. I will pull up the test262 PR. This is the proposal for allowing global vars that were introduced via sloppy direct eval to be redeclared by lexical declarations, or the other way around. The test262 PR was opened and approved by PFC, and with that requirement satisfied I would like to now ask for stage 3.

WH: Sounds good.

CDA: You have a +1 from MM for stage 3.

CDA: You also have support from WH, DE, DLM, and LGH.

SYG: All right, sounds good. Thanks.

CDA: And any objections? … No.

### Conclusion

- Stage 3

## (continuation) Extractors

- [proposal](https://github.com/tc39/proposal-extractors)

RBN: We might be able to get through some of the remaining queue on that. I do have a question though. We also talked earlier, when I presented on the deterministic collapse of `await`, about something that required some review: NRO wanted to look at the changes to the PR based on the discussion before determining whether that reached consensus. I don't know if we can address that now, because I think it's fairly short?

CDA: Noting that NRO is not present.

RBN: Oh, is he not? I believe he signed off on that, but if that's not something we can move forward, then we can talk about extractors.
CDA: It's your call, but if he's not here, then you probably want to favor extractors.

RBN: All right, I think that's fine. Let me check. Do we need to post the queue somewhere for discussion to continue?

CDA: Yeah, um...?

RBN: There were two replies: one from WH, and one about throwing on failed matches, which was more for pattern matching than this, but I don't know if we want to deal with that. WH, do you have context?

WH: I was +1 to whoever was speaking at the time. And I now don't remember who that was.

RBN: All right. And then the second one was a reply that talked about throwing for failed matches. It seemed like a nonstarter for KM.

KM: Yeah, I think I was misunderstanding the context of the situation, so don't worry about that for now.

RBN: This might be something to discuss in the pattern matching group, because we were talking about the refutability of match expressions: any case that didn't match in the match expression, or a match expression with no cases at all, would throw. I think that's more for pattern matching than this proposal.

MM: The point that I was hinting at is that the protocol that is part of this proposal could also state whether the match is happening in a context where a failure would cause a throw; the matcher is able to fail rather than throw, but where it would throw anyway, it could throw something more diagnostic.

RBN: I think we did discuss that, and I think that's something we can consider.

MM: Yes.

RBN: Okay. And then, let's see, the next topic that we had was a concern from DLM about setting a precedent for design choices that we know are inefficient.

DLM: Thank you. And I wanted to thank you for the slides you put together for this.
I was a little bit uncertain about this proposal when you first presented it at the last plenary, and I thought you did a very good job of explaining the feature and demonstrating the symmetries with the existing forms of destructuring, and you've also been responsive to our concerns about using the iterator protocol for the return value, just because we know this is something that's currently slow. In this case it seems like we probably have some ways of optimizing it. I just wanted to register a general concern about standardizing something that we know to be slow and hoping that the engines will be able to optimize it in the future, because that could be a way of backing ourselves into a corner. In this case I don't have a specific concern about how this is being handled, because you have a fallback if there are problems; I just wanted to raise this for the future.

RBN: So, I believe the next topic is: WH - line breaks in function calls?

WH: Yes, this is a rather thorny topic. The proposal, as it stands now, changes the behavior of existing code in incompatible ways. To fix it, we'd need to prohibit line breaks in function call expressions, which has its own adverse consequences. And I don't like either of those options.

RBN: I think you would have to provide some additional context as to what the specific error is.

WH: Okay, the issue is, if you have:

```js
let x
(a) = b
```

The proposal as it stands would change the behavior of that code into an extractor call where there's no extractor before.

RBN: One of the reasons... so you're talking about assignment expressions?

WH: The scenario is this:

```js
let x
(a) = b
```

RBN: I don't think that would be valid syntax.
Maybe it would be, but I think we would want to preserve existing syntax regardless, so if there are changes that need to be made to the grammar because of an inconsistency, I would appreciate it if you could file issues so we can adjust those, so that we don't break something that is working. I definitely think we should look into that.

WH: I was anticipating that you would add a no-line-break restriction.

RBN: Yes, I think we could add a no-line-break restriction. There are only specific cases where it's necessary, but I think that's something we can investigate. Yes, thank you.

WH: And that's where the problem arises, because now we'd have a no-line-break restriction inside some function calls, which makes function calls inconsistent depending on where they appear. That's a usability issue.

RBN: I understand. If you could again file an issue on the issue tracker with more context, I'd like to look at it, and if that's something that's considered a stage 2 blocker, that's fine. I would like to make sure we have it resolved.

WH: Okay. I also have a question about the elisions: you allow elisions inside the parameter lists of assignment extractor function calls, but the cover function call grammar doesn't allow those, so I don't know what the intent there is.

RBN: I'd like to not allow elisions. That's what the discard bindings proposal would handle.

WH: Okay. So the spec is in error there at the moment?

RBN: Yeah, it's more that we need to mix both of these proposals together to make sure we have something that works.

WH: Okay, I'll file an issue to get the spec fixed in that case, thank you.

RBN: I appreciate that. Thank you. The last topic that I have saved was from SYG: "remains concerned about runtime performance, thanks for being open, now also concerned about complexities from cover grammars."
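The cover-grammar complexity SYG's queue item refers to can be illustrated with the existing async-arrow case (an illustrative sketch, not from the slides): the parser cannot tell a call from an arrow head until it sees, or fails to see, the `=>`.

```javascript
// `async (...)` is ambiguous until the parser sees whether `=>` follows.

// Here `async` is just an identifier, and `async(...)` is a function call:
const async = (f) => f(1);
const r1 = async((x) => x + 1);
console.log(r1); // 2

// The identical-looking prefix can instead begin an async arrow function;
// only the `=>` much later disambiguates, which is why the spec uses the
// CoverCallExpressionAndAsyncArrowHead cover grammar:
const r2 = (async (x) => x + 1)(1);
console.log(r2 instanceof Promise); // true
```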
SYG: Yeah, the first part was just about the iterator protocol call, and you're open to changing that, so I don't need to speak much to it. The cover grammars thing is this: any cover grammar adds a bunch of complexity to parsers, because you either have to do backtracking, or, if you don't want to do backtracking, you have to keep all of your state around so that you can eventually make the disambiguation choice. So, along the lines of what DLM was saying, this is a general concern. We've talked about syntax budget as a readability budget for the reader; there's also a complexity budget for parser implementations that arises from having a bunch of cover grammars, and likely pattern matching adds more, and we're pretty concerned by the combination of both. This cover grammar is probably not much by itself, but combined with the use case of pattern matching, the cover grammar explosion worries us.

RBN: So, the thing that I think of for the cover grammar in the call expression case is that, for the most part, we already parse out what would have been legal anyway in assignment expressions. For variable declarations and binding patterns, as WH mentioned, there are a few exceptions on the leading declaration for const and let, but once you encounter something like `Option.Some`, once you see `Option.`, you're out of the case where it could be anything other than what is expected. It's identifying that constrained space that is the issue there. We already have a cover grammar for call expressions for handling async arrow functions.

SYG: Yes.

WH: It's more a matter of making those things work together. I do plan, because there are a couple of these cover grammars that need to come together, on taking a longer look at that.

CDA: Okay, we're at time, RBN.

RBN: I'd like to potentially advance to stage 2. I know there are most likely blocking concerns.
Advancement to stage 2, and then we can discuss whether there are any blockers for stage 2.

CDA: You have a +1 for stage 2 from MM and EAO. Do we have any other support for stage 2, or do we have any objections to stage 2 for extractors?

WH: I would like to resolve my concerns before we advance to stage 2.

RBN: I appreciate that. That's fair. Thank you.

DE: Sorry, could we briefly enumerate which concerns, just for the conclusion, are especially important for stage 2?

RBN: I think the biggest stage 2 blocker that WH raised was the issue around a no-line-terminator restriction in function calls being problematic, or the need for an NLT restriction to make things work and the problems that entails. We need to work out that syntactic issue first.

DE: Great. Are there any other concerns that people feel need to be resolved before stage 2? Asking the whole committee. Because if it's limited to that, that's great progress.

WH: The other concern, which I share, was about the new cover grammars, raised by SYG.

SYG: I was thinking of those as things to be resolved during stage 2. Stage 2 is not a free pass, so it is a thing I would like to resolve, but I don't see it as a stage 2 blocker.

WH: Yeah, I would agree with that, but I just wanted to note it as a concern.

RBN: Yeah.

DE: Good. And we also have the performance concern noted. So, good.

### Speaker's Summary of Key Points

- WH noted the cover grammar does not support elisions, but they are included in the refined grammar.
- WH noted an NLT issue that must be addressed before Stage 2.

### Conclusion

- General consensus on proposal direction.
- Did not advance due to issues in the grammar that must be resolved before Stage 2.
## Signals for Stage 1

Presenter: Daniel Ehrenberg (DE), Yehuda Katz (YK) and Jatin Ramanathan (JRA)

- [proposal](https://github.com/proposal-signals/proposal-signals)
- [slides](https://docs.google.com/presentation/d/1MJqndTS5RmTEwTbtLTPsEloc-a_MWR8daQINgDim2RA/edit#slide=id.g1f570b058be_0_573)

DE: So, now we have, I think, CDA, you were saying, the signals proposal. I think JRA is going to put up the slides and start presenting. We're also joined today by a number of newer delegates and observers who have been developing this proposal together.

JRA: All right, excellent. So yeah, thanks for having us. DE, YK and I are presenting signals, and we are here to request stage 1. This is roughly the set of topics that we want to go through: an introduction for folks who might be unfamiliar with signals, how signals can help organize UIs, and then DE will talk about the standardization approach and some frequently asked questions. So, let's get into it. What are signals? Signals are a reactive primitive that we are proposing we add to the language. They're variables with tracking, and the thing they're tracking is every place where the variable is used. Typically we refer to those places as computations, so computations use variables, and a computation at the end knows all of the variables that it accessed during the computation. And finally you have side effects, which are arbitrary pieces of code that can depend on signals or computations, and these side effects will execute only if the signals or computations that they depend on are invalidated. So, this is simple. I want to go through a simple example before we dive into more API details. Here's some code which demonstrates a counter variable, some simple computations, like computing whether the counter is even and turning that into text, and finally rendering that text. I'll give people a few seconds to go through and read this.

JRA: Okay.
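The slide's code isn't captured in the notes; the following is a plausible reconstruction of the kind of manual-update example being described (all names here are illustrative, not from the slides):

```javascript
// Manual, non-reactive version: nothing tracks dependencies.
let counter = 0;

const isEven = () => counter % 2 === 0;
const parity = () => (isEven() ? "even" : "odd");

let output = "";
function render() {
  // parity() is recomputed on every render, whether or not it changed,
  // and every code path that writes counter must remember to call render().
  output = `${counter} is ${parity()}`;
}

counter += 1;
render(); // easy to forget; this is the manual bookkeeping signals remove
console.log(output); // "1 is odd"
```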
So, there are some problems with this, right? The first problem, the highlighted render call, is probably the worst of all: when you update the variable, you need to know which parts of the UI need to be updated, and that update code needs to be complete. Computations like parity will be re-run even if their upstream has not changed: every time you call render, parity is going to be called. And then you have issues around composability, like if the UI comes to depend on multiple signals, or no longer depends on a signal. You have to do a lot of manual bookkeeping to make this work, and this is not a great way to write this code. So you have, essentially, the text node, which depends on the parity computation, which depends on the counter state. The terminology that we'll use is: we'll call those variables, like the counter, "state"; we'll call those intermediate values "computeds"; and the node that actually depends on the computeds and the state is the "effect". So we have three kinds of nodes, and we're trying to build up this graph of computations that allows us to keep our data in sync with whatever side effects we have, the most popular side effect being updating the UI. So the point of signals is this: if you have a slightly more complex graph like this one, with two computations C1 and C2, let's assume that the C1 computation is really expensive, and all that the user did was update, for some reason, by some means, the S3 signal. When S3 changes, both effects need to run, because they depend on S3. However, when E1 runs, you don't need to re-run the expensive C1 computation, and this is where autotracking helps: autotracking is building up a cache key, because we know a computation needs to be invalidated only when something in its key changes, and the system can take advantage of this. So, with that, we'll just look at the API.
The API for signals: essentially, we have a class called `State`, which has a getter and a setter, and when you use the getter, the autotracking kicks in. The next one is `Computed`. A `Computed` doesn't have a setter, since computeds are pieces of logic, represented by the callback, that re-run any time the dependencies they read change. We have an options bag that contains, among other things, `equals`, a way to customize the comparison function.

JRA: And finally, we have a bunch of `subtle` APIs, in what we call the `subtle` namespace, where we're trying to collect a few things that are required to make the system useful and usable across the various libraries that need to depend on it. Possibly the most interesting `subtle` API is `Watcher`. `Watcher` is a foundation over which you can build effects. Effects are not being proposed right now as part of the proposal, for a pretty important reason, and I'll try to get into that here. So, let's visualize the APIs. Our developer is interested in using signals, and they look at the proposed APIs. The first thing they do is try to build their data model, so they start using the `State` API to declare the different variables, and they have derivations in the data model where they want slightly more complicated pieces of information that depend on other state. Lastly, they start using a library, and the library provides the effect API, which represents something that knows how to run the callbacks that are given to it using an appropriate scheduling strategy. Depending on the problem space, depending on whether you're updating a UI, these callbacks that need to be re-executed can be optimized in various ways. We don't believe that the language can actually make that decision in a way generic enough to be applicable to all use cases.
Instead, different use cases might want to bring their own scheduling or execution strategy to finally decide when and how the effects should be executed. Watcher really provides the foundation for that. In this diagram, the blue boxes are what the language will provide, and the green box is something the developer could either write themselves or pick up from a library to use with `State` and `Computed`. With that, I will hand over to YK to talk about how signals can help us manage UIs.

YK: Hello, everybody. I haven't been here in a while, but I'm excited to be back. Can you go back one slide? So, next I'm going to ground the abstract data concepts that were just described by showing how users would define signal-reactive data independently of frameworks, but still integrate deeply with the framework's rendering. Next slide. So, first I'm going to start by defining a reactive data structure. The signals are used under the hood, and the user interacts with them through a normal JavaScript interface, which allows them to develop libraries that are not coupled to frameworks but can still be rendered on the framework's schedule. I'm going to talk through the invalidation and rendering flow for both Preact and Vue, and both are going to use the same shared library, to facilitate sharing. Under the hood, the integration currently uses the Watcher API from the `subtle` namespace; if you're interested, check out the full details. I'm going to focus first on Preact. The show-counter function is a normal component; to explain what's going on, if you're not familiar with Preact, the returned JSX describes what to render, using the state exposed by the counter.
Once I've talked through Preact, I'm going to talk about how it works with Vue, a template-based framework, which also interacts with the same data layer. PS: a reviewer just pointed out a few minutes ago that there's a subtle mistake in the slide after the increment on-click. If you notice it, you get a gold star.

YK: So, overall, this is what makes signals so exciting to us framework developers. We started out in earnest last year with the development, and we're giving this presentation now because the frameworks feel pretty aligned on the details. Speaking as a framework author, it's a hugely positive sign that open source frameworks feel that we're aligned enough to invest resources into exploring an integration. I'm going to use the hypothetical Preact integration to explain what this looks like. This helped to flesh out standard signals, as a way of possibly, ultimately, migrating to standard signals, should they become a standard. We created an instance of the counter, and we get the same object back later. The component describes its UI and interacts with the data model. The count property is a getter which gets the current value of the underlying state; it doesn't interact with the signal directly, and this is important: you can extract the signals, and it will still work properly with frameworks that work with signals. I'm worried about my audio. I will put it out of my mind. Um, okay. Next slide. So, the parity property is also exposed as a getter, but it's also... is it so bad that it's impossible to understand me?

YK: So the parity property is also exposed as a getter, but under the hood it's backed by a computed signal, and these computeds add additional caching to the computation. You could use the function directly, and this simplified example is a stand-in for expensive computations that would benefit from caching.
After the first time the component is rendered, the counter is initialized to zero, the output reflects that state, and everything is idle, waiting for a change. On the right side is the state graph; when people talk about autotracking, this is what they mean, and the arrows visualize the tracked dependencies. Next slide.
+ 
+YK: When the user clicks on the increment button, the increment handler is called and changes the value from 0 to 1. Next slide. When the state is invalidated, watchers will fire. This does not immediately update the output, and this is important: instead, it gives the framework an opportunity to integrate with its normal rendering process, and this is what lets libraries built on signals, rather than on any particular web framework, still plug into frameworks. Now, next slide. Eventually Preact will schedule a rerender, which will recompute, and because the rerender is scheduled, we don't have to think about revalidation ordering, and we don't have to worry about handling additional state changes that happen in between. Whenever the render reads counter.count and counter.parity, it will render accordingly. It's going to update and then wait to be notified again. So, um, that just about covers the basics of the integration, but I wanted to emphasize one more thing, which is the granular nature of signals, and I'm going to add another button that increments by 2 instead of 1. If we update by 2, we should avoid updating the parity label. Next slide.
+ 
+YK: So, I'm going to avoid boring everybody by skipping forward through the same steps. We already implemented the increment by 1, and we're now idling in a steady state. What happens if we click the +2 button? Next slide. The step-by-step invalidation process basically behaves like before: we notify both of the outputs, but because the parity hasn't changed, its output doesn't need to change. Preact has to run the entire component function, but other frameworks have template syntax that allows them to take advantage of the cutoff, and we'll see that when we circle back around. Next slide. As before, Preact updates the output and we're back to idling. And we're starting to see the pattern: user interaction results in state signal changes, and Preact takes it from there and segues into its normal rendering process. And when I say user interaction, that can also include other events that happen in the browser, like async responses and [Indiscernible]. Next slide.
+ 
+YK: So, we showed how signals interact with Preact in a way that reacts to signal changes, and that's kind of what was described before around effects; basically, this is the glue into effects. Now, I teased an example earlier, and now we're ready for it. We're going to jump to the Vue example, and it will be pretty quick since most of it is the same. The data side is completely identical; the data side could even be a standalone library, and that would work fine. There's one piece of state, and it's set back to where we were before we incremented by 2, so we're at the same spot. At the surface level, the Vue syntax is pretty different, but it represents the same thing: there are a couple of labels that reflect the current state of the counter, and buttons that update the state. And I think this shows, at an intuitive level, why this works, if we can find the right way to put these pieces together. Next slide. So, clicking the +2 button increments the counter by 2, which is the same signal behavior that we saw before, so we'll skip through the propagation that we saw before. As before, the change to the counter state notifies watchers for the output nodes and hands that to Vue, and Vue treats that as a signal, no pun intended, to schedule a rerender, which then proceeds as normal.
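The counter/parity walkthrough can be sketched from scratch in a few lines. The following is a toy illustration of autotracking and lazy recomputation, not the proposal's `Signal.State`/`Signal.Computed` API; all names (`state`, `computed`, `currentComputed`) are invented for this example, and the real proposal adds equality cutoffs, watchers, and dependency cleanup that this sketch omits.

```javascript
// Toy sketch of autotracked state and computed signals, using plain closures.
// NOT the proposal's API; for illustration only.

let currentComputed = null; // which computed is currently recomputing, if any

function state(value) {
  const dependents = new Set();
  return {
    get() {
      if (currentComputed) dependents.add(currentComputed); // autotrack the read
      return value;
    },
    set(next) {
      value = next;
      for (const dep of dependents) dep.invalidate(); // mark readers stale
    },
  };
}

function computed(fn) {
  let cached;
  let stale = true;
  const self = {
    invalidate() { stale = true; },
    get() {
      if (stale) {
        const prev = currentComputed;
        currentComputed = self; // reads inside fn register as dependencies
        try { cached = fn(); } finally { currentComputed = prev; }
        stale = false;
      }
      return cached; // pull-based: recompute lazily, only when read
    },
  };
  return self;
}

// The counter/parity example from the slides:
const count = state(0);
const parity = computed(() => (count.get() % 2 === 0 ? "even" : "odd"));

console.log(count.get(), parity.get()); // → 0 even
count.set(2); // +2: parity is invalidated, but recomputes to the same value
console.log(count.get(), parity.get()); // → 2 even
```

Note this toy invalidates eagerly and never prunes old dependents; the proposal's semantics are more careful, but the shape of the dependency tracking is the same idea described in the walkthrough.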
+ 
+YK: And Vue's templates make this interesting: instead of running the whole component from top to bottom, it has the ability to take advantage of the invalidation cutoff that happens. In general this is true of other template syntaxes; in addition to Vue, this includes Svelte, Ember, Solid, and others. Next slide. All right. So, finally, as before, Vue updates the count label and we're off to the races. I just want to take a moment, before I hand it back, to emphasize that what we described earlier allows us to fully share data-layer code while retaining full [Indiscernible] over the rendering process: universal libraries can be reactive without thinking about a specific framework, and on the other hand this allows frameworks to fit signals into their own design models and lets them focus on what they're experts in and what excites them. In general, it's a good decoupling. Working together to refine this foundation is why the frameworks are excited about our current status, and why I am, actually. That's it for walking through this speculative/hypothetical formulation, and now I'll hand it back.
+ 
+DE: So, I'll take it from here. I want to go through the motivations for standardization. So, what do we mean by standardizing signals? Obviously we're not just asking that this signal library be included in browsers right now. This is the beginning of an investigation that will take time. We're trying to model this off of the way that promises were standardized in TC39: there was a standardization project to bring together the various libraries under a common interface and mostly common semantics, and that had a very large influence on the promise standard in ES6. We want to do something that is partly similar and partly different. Similarly, we want this common approach; differently, we're not focusing on finding common ergonomics or an interface to be used on top. Ultimately, the core is the autotracking mechanism: the fact that when, within a computed, you reference another signal, that reference itself puts the signal on the dependency list, so that the dependency graph can be constructed. That core autotracking system, and the way the signal graph is recalculated, is the main part of the semantics; as for ergonomics, there are many different components that people put on top of it, and they can layer on top well. A big piece of motivation, maybe the biggest, is interoperability: having one way that this autotracking takes place. Autotracking is all about when you run a computed, you track; ultimately there's sort of a global variable, and whenever you read another signal it gets on the list that's pointed to by the global variable, and then when the compute ends, those are the computed's dependencies, and this gets run every time. This autotracking exists in multiple frameworks independently, and the reason it exists is to drive the model where the view is a function of the model; put another way, the stuff that's in the DOM should be based on the state in a way that preserves from-scratch consistency. So, it should work the same in the initial render as it would in a subsequent render, but at the same time we're trying to make incremental changes so that it goes faster. So, ultimately, having a common data model and a common set of semantics enables us to separate the model from the view, and allows the counter to be in a separate module or package from the code that integrates it into the view. One possibility is that you could have multiple islands on a page and use different rendering frameworks. Hopefully, eventually you could have widgets embedded, but that adds another complexity that this proposal doesn't solve by itself. A standard gives a common reference point for building this convergence.
So, one practical reason why something built in would work better than something in JavaScript is that with npm libraries and packages, version duplication happens, so maintaining the shared global variable is impractical. Next slide.
+ 
+DE: So, there are various strategies for implementation. We hope that a native implementation might be faster. This is something that we want to experiment with, but it isn't the only motivation; the interoperability part is also important. Right now we're collaborating on a polyfill of this proposal for exploration in JavaScript frameworks. I just want to note, native implementations are not magic. I don't expect that they'll have significantly different algorithms than the ones we can use; there's nothing that I would expect GC or, you know, static analysis within the browser to do. A native implementation will just do the same sorts of things that we would do in JavaScript, but there could be improvements and better data structures. Next slide.
+ 
+DE: Signals could also play a part in HTML and the DOM, though there's work needed there. I think it's generally useful to have very commonly used libraries be included; you know, it's possible to disagree with that. Signals have also been useful outside of UIs, anywhere this dependency-tracking notion comes up. Overall, the hope is that this will empower developers to focus on the really important parts; for example, frameworks can focus on things other than this core reactivity, and library authors can make reactivity mechanisms that are retargetable. And overall, I think the structure of standards has been a great way to bring the community together and get everybody to work towards a common goal. Next slide. One possible place for collaboration could be in dev tools. Different frameworks have different tools that people can use to understand the state of their reactivity graph. So, maybe we could work together on a tool, or maybe there's something we could do in native dev tools? That's a topic of investigation. Next slide.
+ 
+DE: Development plan. We are just beginning to prototype things, and our goal during Stage 1 is to get significantly further. So, before asking for Stage 2, we want to have significant real-world evidence that all of this makes sense: in addition to having correct, fully functioning polyfills, which we can do because signals don't require anything special from the engine, we especially want improved tests and documentation. Currently we're working on integrating signals into frameworks so that we can work out all of the problems with the API: expressiveness, performance problems, and overhead created by the API itself. Integrating it into various applications is also great. So, the plan is to continue iterating based on experience developing against this API. Next slide.
+ 
+DE: I'm going to emphasize that the polyfill we have in the proposal repo is unstable, and it's not encouraged for production use. It's been useful to develop a polyfill; we've already found certain mistakes based on it, and there's really no way we could figure out whether this proposal works without trying it. Next slide. So, this proposal is based on collaboration with the community. The first draft included work from engineers working on many different frameworks, and that's enabled a high level of review; we're optimistic that it could work (previous slide) for the underlying usage. Since releasing it, we've seen a lot of community interest. We have a Discord. People can file issues. And we're planning on opening community calls to discuss this as well. Next slide
+ 
+DE: Yeah, I want to emphasize that even though we have this first draft published, it's very, very early, and everything's up for discussion and change, just like any Stage 1 proposal. Next slide
+ 
+DE: So, what do you think? Should this be Stage 1?
Should we have it on the table for TC39 to consider for the future?
+ 
+JHD: Yeah, I just wanted to clarify. You said this is impossible on npm, but the way this is done on npm is peer deps: you put the shared state in a package that everything peer-depends on, and coordination is ensured. It's not convenient, but it's quite possible.
+ 
+DE: Oh, thanks. That's a good idea. I'll try to have that be part of our prototypes, because we'll want the deduplication in that case. Thanks.
+ 
+WH: There's some discussion about this on chat, but I'd like to ask a clarifying question about how the thing knows what signals a computed signal might access. Does it just remember what it accessed last time and pretend those are the ones it depends on, or does it take the union over time?
+ 
+DE: Yeah, that's a good question, and I think in a future presentation we could go into the algorithms for invalidating the graph; this is just the entry point. But for these signals, autotracking just records what was read last time. It doesn't record things that may have been read in previous runs. It's something like a pure function; anyway, a computed is only going to be invalidated if one of the things that was read in its previous execution changes. So if a signal that it reads only on a path that wasn't taken last time changes, the computed won't invalidate. That's on purpose; it causes less invalidation.
+ 
+WH: Where are those states stored?
+ 
+DE: There's a global variable. The agent, I think, would have some slot that holds the "current computed". That computed would be in a recomputing state, and it would append things to its list whenever a signal gets read. If there's a nested computed, it works stack-wise: tracking goes to the computed that is current, and then back to the outer one once the inner one ends.
+ 
+WH: Okay.
+ 
+DE: Does that algorithm make sense?
+ 
+WH: Yes. It was just a clarifying question. There are a lot more questions I could ask, but I'll defer to the rest of the queue.
+ 
+JHD: There we go. Yeah, so, I have a few things: I have two topics on the queue, and I wanted to add something to the discussion with Waldemar. This sounds a lot to me like React hooks: they basically made a rule, enforced by a linter, that all hooks have to be unconditionally called in a component's render, and this was so that they could do this kind of global-state-swapping trick to figure out which hooks were called. And I feel like in practice it allows things to be ergonomic, in the sense that you don't have to repeat yourself about what the dependencies are, but it also causes a lot of confusion and problems; the ESLint "rules of hooks" rule that deals with specifying dependencies, for example, draws a lot of ecosystem complaints. It's not a blocker for Stage 1, but I'm generally concerned about trying to infer the dependencies.
+ 
+YK: I think, JHD, without getting too far into the weeds, those are very good concerns, and I think we have strategies that avoid those problems, but I would really love any feedback about those problems concretely playing out, because it's definitely on our radar.
+ 
+DE: Yeah, it would be good to dig into the comparison. I think it's a different situation.
+ 
+JHD: And I definitely think that's a Stage 2 discussion; I just wanted to bring it up. My first queue item: I see that all of these produce class instances, and there are a few reasons why I want to avoid that. Completely setting aside my personal aesthetic around programming styles, I think it's really important to be able to separate capabilities here.
So, for example, a signal has get and set, and I don't want to – like, I'd have to bind methods if I'm going to hand them off. I think that was the error being referenced in the slide, because without the binding it doesn't work.
+ 
+YK: It wasn't that; it was that I didn't pass the event.
+ 
+JHD: So, yeah, but if the method wasn't bound, that would produce another error.
+ 
+DE: In this case [of the Vue code sample], the method was bound, and the error would happen because you don't put the parentheses, so Vue is going to pass the event as the parameter, and that's going to trip it up because increment takes a default –
+ 
+DE: Your actual question, um, I think it's – um –
+ 
+JHD: I didn't finish my actual question.
+ 
+DE: Oh, sorry. Keep going.
+ 
+JHD: So, it seems to me that it would be much simpler if the API call that produces a signal, let's say, or that state, returned just an object with a get and a set function on it, as opposed to a class instance. That allows me to hand off both capabilities, or just one capability, to somebody else, and allows me to build a freeze mechanic where I wrap the set function and use a closure to deny access to it, or replace it with a no-op or something, when I want to deny access. I feel that sort of design would be safer by default: harder to mess up.
+ 
+DE: Yeah.
+ 
+JHD: And again, that's a during-Stage-2 thing; I just wanted to –
+ 
+DE: I'm very sympathetic to the capability-separation argument, and I think we might want to have a built-in thing that has the capabilities pre-separated, but at the same time, as you noted, it can be wrapped. We wanted to minimize the number of allocations to make sure it was practical. Do you have thoughts?
+ 
+JRA: I think you covered that. I just wanted to point out that we are working with frameworks where the separation of capabilities is incredibly important, so we're going to verify that our design actually makes sense for them without placing an undue burden on them.
+ 
+YK: I want to say that earlier designs did do the things you're saying, JHD, and I think these are Stage 2 things, but it's definitely something a lot of people are interested in.
+ 
+DE: The thing in particular is, even if we separated the capabilities, frameworks would still have to wrap each one [causing additional allocations even though capabilities were already separated in practice], so that's the kind of thing we're trying to work through.
+ 
+CDA: Sorry, I'm going to jump in here. We have less than 5 minutes, there's a sizable queue, and I believe you still want to ask for Stage 1. Do you want to take a look at the queue and pick out anything there that you want to talk about?
+ 
+DE: Um, okay, let's see if anything in the queue is a potential – I mean, can I ask the people on the queue, or people not on the queue, if they want to raise any of these concerns, especially high-priority ones, before we consider Stage 1?
+ 
+JHD: Mine doesn't apply to Stage 1, but it might block Stage 2, and I wanted to deliver that feedback in plenary.
+ 
+RBN: I was going to say that I also have something that may not block Stage 1, but it may give others pause and cause them to be a potential block, and I would like to discuss it.
+ 
+MAH: Same; we're supportive of Stage 1, but we have concerns with the global mutable state that is currently observable. That's definitely a Stage 2 blocker, and we would like to invite the champions to discuss this in the minutes remaining.
+ 
+DE: Okay. Can we call for consensus on Stage 1 and then continue going through the queue? Would that be okay with people?
+ 
+DE: I want to understand if there are any objections.
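The capability-separation pattern JHD described, wrapping a get/set pair in closures so the read and write capabilities can be handed out and revoked independently, can be sketched as follows. This is a hypothetical illustration, not the proposal's API; `makeState`, `separateCapabilities`, and all other names here are invented, and a plain object stands in for a state signal.

```javascript
// Hypothetical sketch of read/write capability separation over a get/set pair.

function makeState(value) {
  // Stand-in for a state signal: just a get/set pair closed over a value.
  return {
    get: () => value,
    set: (next) => { value = next; },
  };
}

function separateCapabilities(signal) {
  let revoked = false;
  const read = () => signal.get();
  const write = (next) => {
    if (revoked) throw new Error("write capability revoked");
    signal.set(next);
  };
  const revoke = () => { revoked = true; }; // the "freeze" mechanic
  return { read, write, revoke };
}

const counter = makeState(0);
const { read, write, revoke } = separateCapabilities(counter);

write(5);
console.log(read()); // → 5

revoke(); // deny further writes; readers are unaffected
try {
  write(6);
} catch (e) {
  console.log(e.message); // → write capability revoked
}
console.log(read()); // → 5
```

As DE notes, the tradeoff is allocation: each separation allocates wrapper closures, which is why the champions are weighing whether capabilities should come pre-separated from a built-in.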
+ 
+RBN: I do want to bring up – I can briefly describe my concern before we talk about Stage 1 advancement. I already posted an issue on the issue tracker; it's issue 111. I have a concern that we're putting the cart before the horse by looking at Signal and Watcher before we look at events and observers. The DOM has events: it has addEventListener and ways to raise events. Node.js has EventEmitter. We already have things that are event-like. Most of these tie into Watcher more than Signal, but we have these existing mechanisms for registering callbacks and possibly canceling those callbacks, and Signal's Watcher is adding yet another mechanism for doing that without the core mechanism that I think we need – even if it's not a built-in feature of the language, but more of a protocol that we could widen out to the DOM and Node.js to consistently handle events. I feel like we're pushing for a feature that is much higher-level than a building block that we don't have in the language but that obviously is needed, because it's used in literally every host environment, though in a different way in each. So, I want to make sure we're thinking about that, and I think we need a solution for this, at least before it gets to Stage 2, because it might have a dramatic impact on the Stage 2 API design.
+ 
+DE: Great. I agree that this is about Signal.subtle.Watcher and not Signal.Computed, which is pull-based. And I agree that we can keep iterating on this API; it would be great to enhance Watcher with those mechanisms, and we'll see how the standards environment proceeds during that time. As you mentioned, several other things were not held back because of the lack of unification; I don't see why this one should be. I responded to your issue with a number of requirements that are slightly unique about Watcher for an API; we'll see whether a unified Observable/Event API meets those requirements.
+ 
+RBN: Just to add to that: the reason we have this discontinuity is that we keep building different ways of doing the same thing, and this needs to converge somewhere. I think that's why this should be held back before Stage 2: we need a consistent way of addressing this, especially if we're talking about a core feature that any of these systems could use for any reason.
+ 
+YK: I think this makes sense; I replied to you on the comment thread, and I would love to continue there.
+ 
+RBN: Thank you.
+ 
+CDA: All right, do we have support for Stage 1? I'm leaving the queue intact deliberately.
+ 
+SYG: Can I do my item real quick? I have a question for the champions. I'm not going to block Stage 1, but I want to double-check something.
+ 
+SYG: So, standardizing something, even at Stage 1, may be seen as a signal by folks unfamiliar with TC39. As the platform and JS, we ought to remain somewhat impartial to things built on top of us. I wanted to double-check that you've talked with people, and that they would not take Stage 1 as the platform endorsing something that React is not going to do, making them less competitive or something.
+ 
+DE: It's definitely not about making anyone less competitive. We've been talking with the React team; you would have to ask them their thoughts, as I don't want to misrepresent them. There are various possible integration points with React's current model, and it's not necessarily competition in this case.
+ 
+YK: They seem interested in helping us work through it, right?
+ 
+DE: Yeah, they've been open to communication, and they will be able to state their own position.
+ 
+JRA: And we have gotten some engagement from the React community; recently I saw a signals-based `useSignal` implementation for React by the author of Jotai. We would definitely benefit from engaging more with the React core team, and we remain open to doing that.
+ 
+CDA: Okay, we're past time. Once again, looking for support for Stage 1.
+ 
+DE: Does anybody think Stage 1 is a good idea?
+ 
+CDA: We have a plus 1 for Stage 1 from DRR.
+ 
+MM: Yeah, we support Stage 1, although we've got a lot of issues to be worked out.
+ 
+CDA: Plus 1 for support from MM. Plus 1 as well from MF. Plus 1 from EAO. Plus 1 from SRV. Do we have any objections to Stage 1?
+ 
+CDA: Seeing nothing, hearing nothing, Signals has Stage 1.
+ 
+CDA: We are past time. If some folks would like to continue discussion, I think we can stay on for a little bit longer, or maybe go through the items that are in the queue.
+ 
+> no transcriptionist below
+ 
+LJH: We should take note of the lessons from fetch(): it was intended to be a low-level primitive, but so many users used it directly that WinterCG had to spec `fetch()` differently, due to the impossibility of implementing browser fetch() semantics on the server.
+ 
+DE: Yes, it would be great if we can improve the ergonomics of direct use; I think Signal.State and Signal.Computed aren't so bad, but still, it's challenging and complex to use the Signal.subtle features. We want signals to be usable in reality, and that means hooking into some relatively complicated/custom framework code.
+ 
+YK: .... There is a plausible answer for the DOM to provide a renderer, but we sliced things this way because there was not a good single answer. If we shipped something minimal, that might have the same problem LJH mentioned, in a different form: people would end up using the primitive directly instead of what they actually want anyway.
+ 
+JRA: Angular is experimenting with Signals over the next few months, and we will be able to report back with details.
+ 
+YK: Same with Ember.
+ 
+MAH: We have concerns with global mutable state. We have no problem with global mutable state as long as it's not observable by user code. AsyncContext took great effort to ensure that while there was global state, there was (mostly?) no way to observe that state. This proposal does provide capabilities to witness these mechanisms. We basically have concerns with global mutable state that's observable, and we encourage exploring whether these mechanisms are necessary, and why.
+ 
+DE: `currentComputed` is a lot; you're right. We'll look into making that more limited. Regarding the `notify` state, we should follow up on why that is a problem; it seems like that shouldn't leak too much information. I'll follow up in SES calls.
+ 
+YK: The version of Signals I implemented before this effort considered these concerns. We're confident in the high-level APIs, where we have enthusiasm. The low-level APIs we expect to change more.
+ 
+MAH: Looking forward to these discussions.
+ 
+DE: Sounds like there are some concerns around the lower-level APIs and mutability.
+ 
+WH: RBN had concerns around events.
+ 
+DE: Events/Observables might be a way of modeling Signal.subtle.Watcher, which does have a callback, but there are some specific performance-related requirements which might not be met by other APIs. Ultimately, I'm not sure this proposal should be held back by that; your application code should generally not be taking subscriptions to signals directly, as this Watcher API is more for frameworks to use. Note that Signal.Computed is not an example of something that makes sense to model with events or observables, as it is lazy/pull-based, whereas events and observables are eager and push-based.
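The push/pull distinction DE draws here can be sketched in plain JavaScript. This is a toy model, not the proposal's `Signal.subtle.Watcher` API; all names are invented for the example. A watcher is notified eagerly (push) whenever watched state changes, while a computed stays lazy (pull) and only does work when someone reads it.

```javascript
// Toy illustration of push-based notification vs pull-based computation.
// NOT the proposal's API; for illustration only.

function makeState(value) {
  const watchers = new Set();
  return {
    get: () => value,
    set(next) {
      value = next;
      for (const notify of watchers) notify(); // push: eager notification
    },
    watch: (notify) => watchers.add(notify),
    unwatch: (notify) => watchers.delete(notify),
  };
}

let computeRuns = 0;
const count = makeState(0);
const double = {
  // pull: nothing happens on set; work happens only on get (no caching here)
  get: () => { computeRuns++; return count.get() * 2; },
};

let notifications = 0;
count.watch(() => { notifications++; }); // e.g. a framework scheduling a rerender

count.set(1);
count.set(2);
console.log(notifications); // → 2  (pushed on every change)
console.log(computeRuns);   // → 0  (never read yet, so never computed)
console.log(double.get());  // → 4
console.log(computeRuns);   // → 1  (computed only when pulled)
```

This is why, as DE says, events/observables map at best onto the Watcher side of the proposal: a computed never fires anything on its own, so there is nothing event-like to subscribe to.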
+ 
+JRA: Not including an effects API was one concern in the community, but this was a deliberate decision.
+ 
+DE: We'll be having calls to discuss all of these issues externally. Please, if you're interested, join the Discord, comment on issues [on GitHub], whatever works for you!
+ 
+### Speaker's Summary of Key Points
+ 
+- Signals are a possible addition to the JavaScript standard library to efficiently and consistently handle reactivity/state management. They form a mutable, automatically tracked data dependency graph.
+- A minimal signal API is proposed, based around State and Computed signals, with some advanced "subtle" features (the latter in a more questionable state).
+- The effort here has been based on the collaboration of several frameworks, who are working to validate that this is a possible common model to place underneath their existing systems (in branches), based on a polyfill which is not suitable for production.
+- Some concerns raised by the committee:
+  - RBN asked about the relationship with the Events/Observer pattern. Computed signals are not correctly modelled by events/observers (which are push-based, whereas computeds are pull-based), but Signal.subtle.Watcher might be a candidate. Discussion continues at https://github.com/tc39/proposal-signals/issues/111
+  - JHD asked about the relationship to the React "rules of hooks". Signal APIs don't have quite the same constraints, but the champions will follow up with JHD offline to understand the concerns better. JHD also asked about capability separation for reading and writing, which is being discussed in https://github.com/tc39/proposal-signals/issues/94 and https://github.com/tc39/proposal-signals/issues/124
+  - MAH asked whether certain powerful APIs such as Signal.subtle.currentComputed (which may represent an unintended communication channel) are needed. The signal champions will follow up on this concern during regular SES calls; it is likely that the subtle APIs can be modified.
+ +### Conclusion + +- Signals reach Stage 1 +- Development of the polyfill and integrations into new and existing frameworks will continue, coordinated through an official public Matrix channel and regular public meetings on the TC39 calendar.