From b212e97bae4366f80f7a9a42b332ebace1c0ef24 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?= Date: Thu, 15 Aug 2024 19:03:24 -0700 Subject: [PATCH] July meeting notes --- meetings/2024-07/july-29.md | 1073 +++++++++++++++++++++++++++++++++ meetings/2024-07/july-30.md | 827 ++++++++++++++++++++++++++ meetings/2024-07/july-31.md | 1113 +++++++++++++++++++++++++++++++++++ 3 files changed, 3013 insertions(+) create mode 100644 meetings/2024-07/july-29.md create mode 100644 meetings/2024-07/july-30.md create mode 100644 meetings/2024-07/july-31.md diff --git a/meetings/2024-07/july-29.md b/meetings/2024-07/july-29.md new file mode 100644 index 00000000..7a1ed179 --- /dev/null +++ b/meetings/2024-07/july-29.md @@ -0,0 +1,1073 @@ +# 103rd TC39 Meeting | 29th July 2024 + +**Attendees:** + +| Name | Abbreviation | Organization | +|------------------|--------------|------------------| +| Chris de Almeida | CDA | IBM | +| Ujjwal Sharma | USA | Igalia | +| Waldemar Horwat | WH | Invited Expert | +| Ben Allen | BAN | Igalia | +| Jesse Alama | JMN | Igalia | +| Linus Groh | LGH | Bloomberg | +| Ron Buckton | RBN | Microsoft | +| Daniel Minor | DLM | Mozilla | +| Chip Morningstar | CM | Consensys | +| Philip Chimento | PFC | Igalia | +| Michael Saboff | MLS | Apple | +| Mikhail Barash | MBH | Uni. Bergen | +| Samina Husain | SHN | Ecma | +| Keith Miller | KM | Apple | +| Richard Gibson | RGN | Agoric | +| Justin Ridgewell | JRL | Google | +| Aki Braun | AKI | Ecma Secretariat | +| Jordan Harband | JHD | HeroDevs | +| Istvan Sebestyen | IS | Ecma | +| Dan Gohman | DGN | Invited Expert | +| Josh Blaney | JPB | Apple | +| Dmitry Makhnev | DJM | JetBrains | +| Chengzhong Wu | CZW | Bloomberg | +| Ashley Claymore | ACE | Bloomberg | + +## Opening & Welcome + +Presenter: Rob Palmer (RPR) + +RPR: welcome everyone to the 103rd meeting. We are here I believe in Los Angeles today or pretend it is Los Angeles well at least the West Coast at least, and close to home for many people here. All right, I will say looking at the people we have in attendance, I think most people know who we are but I am Rob one of your chair group here and as well as Uijwal and Chris, we are your cochairs, and we also have some facilitators, and Justin [INAUDIBLE]. We have two our facilitators who are not with us. But they are part of our facilitator group. Hopefully, if you are here, and you are dialed in through WebEx today, you will have reached here via the entry form. If you have entered the link in any other means and go back to the reflector invite and sign in. And so it is utmost importance to record all entries. So if you have received URL vast other means tell the person who is distributing it, do not distribute it in the future. And we have a Code of Conduct which you can find on our main web page. And please do your best to respect the Code of Conduct in the spirit in which it is written and not just the letter of law. And we should all try to be excellent to each other, if anything happens that makes you uncomfortable, you will report, and reporting is anonymous. + +RPR: okay. That is fine my connection is currently on full to 4G as I can see. I am on 4G. We begin two hour if the morning for a break and two hours in the afternoon. Or evening, and we have the most important which is TCQ, and so, you will find the link on the reflector invite. 
You can view the queue of all items, click on an item to get its details, and see the current agenda item. There are some buttons to help you discuss and enter the conversation. Generally you should be using the left-most button, which starts a new topic; that keeps the conversation spread out and organized. If you want to reply to the current topic, use the second, lighter blue button. If you just want to briefly get to the top of the queue, you can ask a clarifying question, and if you really need to interrupt, for example if you cannot hear me, you can raise a point of order. While you are holding the mic and speaking, you will see an "I am done speaking" button; please use it, as it helps the chairs when you finish your speaking time by clicking it. For text-based communication we have Matrix, and most people are already in there. If you are not, you can ask in the admin and business channel to get yourself set up, and message one of the chairs if you have any trouble.

## Secretary's Report

Presenter: Samina Husain (SHN)

- no proposal
- [slides](https://github.com/tc39/agendas/blob/275aa87a6f34a8e11a72ca1efee47d7747cad97a/2024/tc39-2024-036.pdf)

SHN: Thank you. Welcome everybody to the TC39 meeting; as always, it is a pleasure to be here. As you see, it is myself and Aki Rose who are the secretaries for TC39. I would like to thank István for supporting TC39 for many years, and to thank him for the continued guidance he is offering not only to me but also to TG5.

SHN: Thank you to the notetakers. I also want to recognize Aki, who has been supporting me as the TC39 co-secretary. I had not made a slide on this for the previous meeting, and I want everybody to be clearly aware of Aki's role. I also thank Aki for the efforts already made to help investigate, find a solution, and implement what was needed for the ES documents in PDF form; that is one step of many in which Aki will be supporting us, so thank you.

SHN: Just a couple of things I want to cover today, as usual, from the secretariat. We will talk about some approvals, projects, and new things that are happening at the GA, and in the annex slides you have the Code of Conduct and some of the next meetings for the General Assembly and ExeCom. The highlight is that the Executive Committee meeting coming up in October has shifted to the 22nd and 23rd of October.

SHN: First of all, congratulations on the approval of the two standards: at the last General Assembly both ECMA-262 and ECMA-402 were approved, so thank you for your efforts in making that happen. This is an active committee and it is great that we have the next editions; they have been noted on the website and added to the news, so if anything is missing regarding these two please let me know. I think we got it all highlighted on the ECMA website, so thank you for these efforts. I also want to highlight another new standard which may be of interest, and which some TC39 members might like to participate in. As you may remember from previous discussions, we have talked about the CycloneDX Bill of Materials, which comes through the membership of OWASP, so we now have ECMA-424 1st edition.
I looked through the specification for this, of course; the work is minuted, the meetings are available on YouTube, and the agenda is open on GitHub, so if you would like to look at that to any extent, that is great. The meeting cycle for the next work and the next edition has begun; the 25th of July, last week, was the first meeting. In addition to this technical committee, we have two technical task groups, on the Transparency Exchange API and on Package URL. Both are active, their scope and charter are published, and you can look at that on our website. If you wish to participate, please just reach out. It is a very active group.

SHN: We have the new proposal for TC55. This is about runtimes, working in conjunction with W3C, and it is up for vote. We have discussed this throughout the year at a couple of different Executive Committee meetings, and there have been a number of discussions regarding the process, objectives, and scope of work; it has been quite fine-tuned, and DE has been drawing it together with LCA and others. I have received some issues regarding the TC, and as we proceed with this particular new proposal we want to make sure we are covering the correct group and that we do not have any risks. A few comments have come up; they will be worked through, and I wanted to highlight to the committee that we have this potential new work proposal. We also have some new members that have been approved. They were of course provisional members earlier in the year, and many of them were already active as invited experts, and I think it has worked out very well. Thank you for the membership, and we welcome these four new members to ECMA and TC39: Replay.io, HeroDevs, Functional Software, and the Open Source Business Alliance.

SHN: Our General Assembly was held June 26/27, and we did the approvals. As usual there was a complete presentation from the chairs on their technical committees, with some highlights and where the direction is going. We had two members of TC39 present at the last GA in Geneva; SFC was there and presented on TG2, and both presentations received very positive feedback. It was quite interesting to see some of the questions that came up from the General Assembly attendees. The report that comes from the technical chairs could add a bit more detail on what the TGs are doing, because it is of interest to the GA and to the Executive Committee; this is positive feedback, and if we can add more information on the chairs' work and the future work of the TGs, that would be important for the members to know what is going on in the technical committees.

SHN: That is the extent of my presentation; it was quite short today and was specific to bring you the feedback on the positive approval of the standards, the work moving forward with TC54, and to highlight to all members that we have a new proposal for TC55. I will move to the annex. Aki and I are working on getting approval on licensing, and I do not have that approval for today. I thank all of the invited experts that are on the call for their support, and those that are members of organizations that could eventually consider becoming a member of ECMA, that is always appreciated.
The Code of Conduct was already mentioned by RPR in his presentation. The summaries and conclusions are very important; thank you very much for the last meeting, we had quite a few, and I hope we can continue that, as it makes for a strong commitment to the committee. And last but not least, a reminder of the list of documents; I have listed them there, and you may be able to access them through your TC chairs, along with some General Assembly documents. The meeting taking place in Tokyo will be the next in-person one, and there are the General Assembly dates and committee dates; note that the meeting date for the Executive Committee has been pushed back. If there are items needing approval, I would like the chairs to keep that in mind; I will give you a heads-up when we are close to the 60-day mark before the GA, and if you need to bring anything up, you may do so. Good, I think that is my last slide. Thank you, I will stop sharing, and I am open to any questions or comments if there are any on the queue.

RPR: Thank you SHN. Very good report. Any comments or questions on this? CDA?

CDA: Can you talk more about what the concerns were about the IPR policy for the proposed TC?

SHN: For the proposed TC55, the concern is about the IPR policy. The work is being done predominantly in a group with W3C, and we want to make sure the ECMA IPR policy is well aligned with that group, which is not fully chartered by W3C, so that the policy side at ECMA covers it, including any historical work, and that the policies are covered wherever the work happens. This is not a showstopper; what it requires is clear verbiage in our proposal, and our verbiage was not clear enough, which is why the concerns have come up. I need to look at it further; the concerns came up late Friday night and I have just received some feedback, which will allow me some time to go through it. But our intention is that the IPR is handled such that the ECMA IPR policies govern. CDA, that is the best I can answer at this point in time.

CDA: Thanks.

SHN: If there are no further questions, I do have one comment, and I apologize: Aki, if there are any comments you would like to add to the presentation, please take the time to do that now.

CDA: Thank you, sorry Rob, I just assumed there were no more queue items since I heard nothing.

RPR: You are correct.

RPR: We will work diligently on the key points and the conclusions; anyone here is free to intervene and remind us if we forget to stop and write these down.

SHN: Thank you very much. For TC55, if anybody has any further comments, feel free to share them with me by email; even if you are not a voting member, I am open to hearing any voice regarding these concerns. Thank you.

### Summary of the Secretary's Report

The Secretary's Report, presented by Samina Husain, covers several key updates and developments related to various technical committees and ongoing projects, including acknowledgments, updates on standards and committee work, new memberships, IPR policy concerns, and upcoming meetings.

Below are the main points:

#### Acknowledgments

- Expressed gratitude to István for his long-term support of TC39.
- Acknowledged Aki Rose for her role as the TC39 co-secretary.
#### General Assembly and Executive Committee Updates

- The General Assembly (GA) and Executive Committee meetings were highlighted, with changes noted for the upcoming Executive Committee meeting, now scheduled for October 22-23.
- The committee was congratulated on the approval of two standards, ECMA-262 15th edition and ECMA-402 11th edition, which have been published on the ECMA website.

#### New and Ongoing Projects

- A new standard related to the CycloneDX Bill of Materials was noted, and TC54 is active, with meetings and task groups already established.
- A new proposal for TC55 (WinterCG, in collaboration with W3C) was discussed. The proposal has undergone significant review and refinement, and a vote on it is upcoming.
- Concerns raised about the Intellectual Property Rights (IPR) policy related to the proposed TC55 were noted and are under review.

#### New Memberships

- Four new members, Replay.io, HeroDevs, Sentry and JetBrains, have been approved to join ECMA and will be participating in TC39.

#### Summary and Conclusion

A reminder about the technical notes was provided: summaries and conclusions should follow a format which ensures that anyone reading them can quickly understand the discussion, main points, and resulting actions. The summary captures the objective or main topic of the discussion; the conclusion includes the agreements, resolutions, and next steps.

#### Open Discussion

Further comments or questions were invited regarding the TC55 proposal and other topics discussed in the report.

## ECMA-262 Status Updates

Presenter: Shu-yu Guo (SYG)

- no proposal
- no slides

SYG: There was a normative change that got consensus a while back, where async generators were not correctly handling promises that can be broken by adding a sneaky getter on some property; I forget which one, I think `return` or something like that. It got consensus and it shipped in Chrome and Safari; I don't know about the Firefox implementation status, but in any case it got two implementations and it was merged. So that fix is in.

SYG: There is another normative fix that is not yet merged, but there is no additional presentation at this meeting about it. There is an obvious spec bug in the module loading machinery for module graphs with await, where it is possible for the spec algorithm to go wrong. There is a fix and the PR is ready to go; this is an FYI for folks who care about this space to take a look, because it is an obvious spec bug, following the precedent that we have from previous spec bug issues. We are not asking for consensus here, as this is not a thing that we have a choice about. If you do care, please read it, and if there are no objections we will try to merge it by the end of this – it says the end of the day, but the end of the meeting is more reasonable.

SYG: As for noticeable editorial changes since the last meeting, the only one to call out is that the @@ notation for well-known symbols has changed to the percent-sign notation (for example %Symbol.iterator%) to align with other intrinsics, as the @@ form is generally less understandable to readers. So for proposal authors, please prefer the percent-sign notation. This list is mostly the same as before, but just to show it again here. And that is it. Any questions?

### Conclusion

Please look at PR number 3357: folks who are well versed in the module machinery are asked to review the spec fix there.
## ECMA-402 Status Updates

Presenter: Ben Allen (BAN)

- no proposal
- [slides](https://notes.igalia.com/p/durationformat-pr-207#/)

BAN: Okay, looks like we are visible. Before I get going on this: there are two normative changes that are up for discussion at this meeting. They are small in terms of text, but they are large in terms of the potential need for debate and discussion. The first is that we discovered – I forget who discovered this, apologies – that the way ECMA-262 and ECMA-402 validate ranges is inconsistent. When 402 is determining, for example, the number of integer digits to include, it takes those values as Numbers, tests whether they are in range, and truncates them, throwing if they are out of range; 262 will truncate first and then range-check. It is a small inconsistency, and the PR we will discuss changes 402 to match 262, but maybe that is not the right way around.

BAN: The other one: if you look at the comment thread on the associated issue, it is very, very long. Currently no browser implementation allows for dynamic updates to locales, though other JS environments have done this for some time. If you allow this in the context of a browser specifically, it becomes a new fingerprinting surface if those updates are observable, and there has been a tremendous amount of discussion on the issue going on for months and months. So this might require a fairly large amount of discussion. In terms of editorial changes, what we have is a bit of matching 262 and a bit of matching Temporal: durations are now formatted by taking the time in nanoseconds, which makes building the format pattern infallible, and that logic is moving into a common abstract operation. We have also removed use of a macro that we used in only one place, since in that one place describing the operation directly is clearer. This reflects changes to be made in 262, and there is one spec PR in which that macro will become a full abstract operation.

BAN: And we moved away from the @@ notation to match 262; I believe that change was in the works for over five years. All right, that is it for the 402 editors' update.

### Conclusion

402 has been updated to match 262 and Temporal in several ways.

## ECMA-402 editors appointment

Presenter: Chris de Almeida (CDA)

- no proposal
- no slides

Private discussion

Addition of BAN to editors group by acclamation.

BAN: I am back.

CDA: Welcome Ben, you are now one of the editors of the ECMA-402 specification.

BAN: Thank you so much! It reminds me of when you get a PhD and you step outside of the room for the committee to deliberate.

CDA: Unfortunately, I cannot open the door and say "come back, Doctor".

BAN: All right, but nevertheless thank you so much.

## ECMA-404 Status Updates

Presenter: Chip Morningstar (CM)

- no proposal
- no slides

CM: So every plenary I get up and have to find a way to declare the eternal immutability of JSON and the stability and compatibility of the ECMA-404 spec. This time it is exactly the same.

RPR: One day, CM, one day. Thank you very much. All right, moving on. We have Test262 status updates with PFC.

## Test-262 Status Updates

Presenter: Philip Chimento (PFC)

- no proposal
- no slides

Prepared statement:

- `RegExp.escape`, `Atomics.pause`, `Math.sumPrecise`, base64, source phase imports now have full or almost-full test coverage. Thanks to the proposal champions for pitching in with tests.
- Large PRs: we are working through resizable ArrayBuffer, landing it piece by piece, and have started to tackle explicit resource management.
- We'll continue to work towards the goal of having a testing plan for each proposal, to make it easier for people to write tests.
- Igalia's grant is ending in September or October and is not renewable. We are working on applying for other grants so that we can keep the current level of staffing.

PFC: Short status update for test262; I don't have slides, but I will paste the points into the notes afterwards. Happy to report that `RegExp.escape`, `Atomics.pause`, `Math.sumPrecise`, base64, and source phase imports now have full or almost-full test coverage, thanks to the proposal champions and everyone else who pitched in with the tests.

PFC: You may know that we have some very large pull requests pending in test262, which we are steadily working through; we are currently landing resizable ArrayBuffer piece by piece, and we have started to tackle explicit resource management. We will continue to work towards the goal of having a testing plan for each proposal, to make it easier for people to write tests and to know when a proposal is fully covered. And then some less good news: the grant that is funding Igalia's work on test262 is ending in September or October, and per their policy it is not renewable. We are working on applying for other grants to keep the current level of staffing on test262, so if you have any information on that topic, let me know.

RPR: Any questions for Phil on test262? No questions on the queue, so I will put myself on there: is there a ballpark minimum amount that helps with test262 funding, as guidance on what people can ask their companies to look into?

PFC: I did cover that at one point, but I don't have the figure off the top of my head; I will take a look and get back to you.

RPR: Okay, thank you. All right, there is still nothing in the queue, so that is all. Thank you, PFC.

## TG3: Security

Presenter: Chris de Almeida (CDA)

- no proposal
- no slides

CDA: Sure, a brief update: TG3 meetings are weekly, and the content has focused lately pretty much exclusively on discussing the security impact of proposals that are in various stages. That is pretty much it. Please join us at TG3 if you are interested in security, specifically the security of proposals. There is another item we have discussed recently, but it has its own topic here, scheduled for tomorrow if I am not mistaken, so we will wait until then to discuss it. Thank you.

RPR: Thank you Chris. All right, continuing onwards. We have TG4. Jon.

## TG4: Source Maps

Presenter: Jon Kuperman (JKP)

- no proposal
- no slides

JKP: I am going to share my screen. I am representing TG4, the source maps task group. Last month we had our second hackathon with our friends at Google; we had folks from Google, Igalia, and Mozilla, probably missing some others, but we had a good turnout, and we spent two full days working together on some of these proposals, getting demos working, going through the specification, and working out all things source maps. The Scopes proposal that I am presenting is the biggest piece of new work; it essentially adds to the specification a way for variable names and function names to persist through source maps and the transformations compilers apply, so that the original names can be shown.
We got a fully working prototype of the Scopes proposal, which is nice; we built both the authoring side, where web tooling adds these new fields to the source map, and the reading side, where dev tools read the fields back out of the source map and use them in the debugging panel.

JKP: We also went through the existing specification in hopes of getting it ready for the October plenary, where we will be seeking consensus on it, and we started exploring new areas, which is exciting, so reach out if you are interested in talking about them; these are all still at an early stage. They include embedding dependency information in source maps, ways we can improve Wasm support and show some entities better, and a debugging API through which you can manipulate source maps and Wasm. We got a demo working with webpack, Babel, and gen-mapping for the dependency map idea; this would allow web tooling to give you suggestions for tree shaking. For Wasm debugging, we had a big meeting with Mozilla and Google folks beforehand, and we hope it will lead to clarifying some currently underspecified APIs so that languages compiling to Wasm can better control how they display in dev tools, for example debugger expression languages. And then strict validation would be a discussion of whether there is any way we can tighten up the existing spec, which has often been underspecified, for example around what happens if a field is missing or if there are errors; all of the tools today try to fail as gracefully as possible, so we discussed new fields that could correspond to stricter validation.

JKP: The last thing I will cover is the testing repository, which is supported by Igalia. We have the entire specification covered by unit tests; there are additional areas that need to be fleshed out, but it is near completion, and it has already been integrated into Mozilla's source map library, which is used in their dev tools, and I think it is being integrated into Chrome as well. Our work now continues with adding test coverage for the upcoming Scopes proposal. Cool, and the last thing, kind of a heads-up: we are planning on coming to the October plenary with the cleaned-up specification and, hopefully, the Scopes proposal for TG4, and we will make sure the meeting is centered around that proposal. Thank you so much.

## TG4 convenors appointment

Presenter: Chris de Almeida (CDA)

- no proposal
- no slides

Private discussion

CDA: Welcome back convener NRO, you have been elected by acclamation.

## TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

- no proposal
- no slides

MBH: Hello, just a brief update this time. We still have our meetings every month. We will arrange the TG5 Workshop co-located with the plenary in Tokyo; this will be on Friday the 11th of October, the first half of the day. It will be hosted by the computing software group at the University of Tokyo. We will talk about formalization of the grammar models that are used in the ECMA-262 spec. This is a topic relevant both to that research group and to the TC39 committee. And also, KAIST promised to talk about their current work on Wasm SpecTec. Everyone is welcome to attend the workshop, and the registration link is in the Reflector post with the information about the next plenary in Tokyo. So yes, are there any questions?

RPR: There is no one in the queue.
Since there are no questions, I will just say thank you for advertising this; last time we only had it on the reflector and not everyone saw it. You can all attend if you wish, on the Tokyo trip. All right, we are doing well on the agenda. Onward we go to updates from the Code of Conduct committee. Chris?

## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

- no proposal
- no slides

CDA: The Code of Conduct work goes well for the most part. We have not had any new reports, and there is nothing notable in our spaces, Discourse, or GitHub.

CDA: We prefer not to be busy on the Code of Conduct committee. As always, I will repeat my pitch: folks who would like to join the Code of Conduct committee, please reach out to one of us on the committee. The main ask is that you are available for a period of one hour every two weeks. Often we do not have to meet during that hour because we have nothing to talk about, but it is important that it is blocked out on your calendar so we can function as a committee when we need to. So thank you.

RPR: All right, thanks Chris. That is an open call for anyone who wants to volunteer, so please consider it. Next up we have Ben Allen with Intl.DurationFormat: display negative sign on leading numeric-style zeroes.

## Intl.DurationFormat: Display negative sign on leading numeric-style zeroes

Presenter: Ben Allen (BAN)

- no proposal
- no slides

BAN: There is one small normative change, essentially an edge case, that I wanted to bring here out of an abundance of caution. One thing about DurationFormat is that we have two fundamentally different ways of displaying durations: one in prose, and one in a sort of digital-clock style. A couple of meetings ago we changed how we handle negative signs, such that for negative durations the sign is only displayed on the largest unit. We have an embarrassing bug: the negative sign was dropped in the following case, when the duration is negative, the digital-clock style is requested, and the first displayed unit (hours or minutes) has the value 0. I don't know if that is large enough to be legible, but we embarrassingly dropped the negative sign in that particular combination of cases. There is a PR up to fix it, and, like I said, out of an abundance of caution I would like to ask the group for consensus to change the current behaviour, where in that specific case we drop the negative sign, so that we instead display the negative sign.

RPR: Before we go to that call for consensus, are there any clarifying questions?

WH: Just curious what happens if the signs of duration components are heterogeneous?

BAN: Sorry, those are rejected; since the start of DurationFormat, mixed-sign durations have not been allowed.

WH: Okay, thank you.

CDA: I cannot read the example.

BAN: My apologies; how about if I copy it into Matrix, will that be fair?

````javascript
new Intl.DurationFormat('en', {hours: "numeric"}).format({minutes: -1, seconds: -2});
// currently: 0:01:02
// should be -0:01:02
````

BAN: I will call for consensus.

RPR: Support on the queue from DLM. Is there any more support, or any objections? All right, no objections, so congratulations, this has consensus.

BAN: Fantastic, thank you very much.

RPR: The emojis on the call help make it feel like a real-life meeting. Next up we have Luca with the source phase imports update.
### Conclusion

- Consensus to fix `Intl.DurationFormat` spec bug that drops negative sign erroneously

## Source phase imports update

Presenter: Luca Casonato (LCA)

- no proposal
- no slides

LCA: Very quick update today. I just wanted to quickly update on what we have been up to. I'm not going to do a full recap of source phase imports; it's at Stage 3. The main thing here was to officially support Wasm on the web, and particularly this `import source` form, which gives you a `WebAssembly.Module` that you can instantiate yourself to create a new `WebAssembly.Instance`; you need this to pass additional options to the constructor, which you could not do with the form shown up here. That's what we're going for.

LCA: Wasm imports are now approved on the Wasm side and reached Stage 3 in April. The HTML integration is actually well under way; it's complete, it's been reviewed by Mozilla and Apple, and it is waiting on the final approval from V8/Chromium. If anyone here from V8 would like to weigh in, that would be great. The quick recap of the integration is that there's no import attribute required, because we consider Wasm to be at the same privilege level as JavaScript: Wasm can import other JavaScript, and JavaScript doesn't need import attributes to be imported on the web. For those in the know about the JS string builtins for Wasm, those are enabled by default, so you are able to use the JS string builtin imports if you do the Wasm import.

SYG: Just wanted to make sure, did you mean phase 3, as in Wasm phase 3?

LCA: Yes.

LCA: Okay. That's the update. I don't know if there are any questions to answer. Otherwise, excited to hopefully get this in by one of the next ones; I will not say the next one.

RPR: All right. Any more questions or comments for Luca? No; all right, thank you. We're doing very well on time, so I think we've slightly reordered the agenda now to bring forward a larger item. Chris, are we supposed to be going to Ben? TCQ has not taken me there.

## Normative Conventions: pretend primitives aren't iterable

Presenter: Michael Ficarra (MF)

- no proposal
- [slides](https://github.com/tc39/how-we-work/pull/152)

MF: This proposal is for the addition of a normative convention in how-we-work. This is a document that is non-binding but generally guides our design around various aspects of the language, based on what we have learned over many years. This particular addition is a continuation of a discussion that we had at the last plenary, when we were discussing, I believe, joint iteration and talking about iterating strings, where we seemed to have full agreement from everybody in the room that we just never want to accept string primitives in positions where we are expecting iterators or iterables. This codifies that and extends it to all primitives. Strings have a built-in Symbol.iterator, and someone could make Numbers iterable by adding Symbol.iterator to Number.prototype. In positions where we expect an iterable, we will reject those primitives.

MF: I could read out the full text of it, but basically the first paragraph is saying what I just said, and the second paragraph is giving some context: we now understand that strings should not have been iterable by default, because there are many different abstractions that a string can be providing, and we don't really know which one you actually want. That gives some context so that people understand our reasoning. And we note that it is new, just like all of the other normative conventions.
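As an illustration of the hazard this convention addresses, a minimal sketch (not taken from the slides):

```javascript
// A string primitive is iterable by code point, which is rarely what a caller
// intends when an API expects "an iterable of strings":
console.log(new Set("hi"));    // Set(2) { 'h', 'i' }, not Set(1) { 'hi' }
console.log(Array.from("hi")); // [ 'h', 'i' ]

// Under the convention, a new iterable-accepting API would throw a TypeError
// for a bare string; callers wrap the value explicitly instead:
console.log(new Set(["hi"]));  // Set(1) { 'hi' }
```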
MF: Happy to take any discussion or clarifying questions. We can go to the queue now.

SYG: I support making strings not iterable in new APIs, but we can't take back that they are iterable today. Any thoughts on education, given that there will be new APIs that don't accept them? From a less advanced programmer's point of view it may seem like iterables are inconsistent.

MF: Yes, I mean, that is the problem with a lot of these normative conventions. Where we no longer do coercion in certain areas, it looks like the built-ins are inconsistent about whether you are supposed to pass an undefined value to things that accept objects, or something like that. With all these normative conventions, we don't document the kinds of things that we already follow; we document things that are a break from the norm, and I think that risk exists for all of them. Hopefully the community also uses these normative conventions as a guide so that the language going forward and the ecosystem going forward kind of align. But I don't have a better answer than that for you. There will be inconsistency in the language for any of the things that we have in this document.

SYG: But it seems like the set of APIs that will accept strings as iterables will no longer grow if we follow this, so could we at least enumerate those, and also send the memo out to other web APIs and HTML and stuff like that?

MF: Yeah, I could try to enumerate the set. It wouldn't just be built-ins, it would also be syntax like for-of, which will still accept a string. I can try to enumerate all of the positions in the language that would still accept primitives, and we could include that in the document as well if you would like.

SYG: Yeah, that would be good, thanks.

MF: Sure. I can update it like that.

DLM: We support this and are quite happy to see more design principles being written down.

MF: I didn't quite hear the last part of that.

DLM: I know this is an ongoing process. We're happy to see things being written down.

MF: Yeah, me too.

LCA: Just wanted to mention real quick something we were just dealing with in the web API world: Web IDL is getting a new async iterable type, and the thought there was also to not iterate strings, which aligns with this; Web IDL sequences already do not support strings.

MF: Then I would like to ask for consensus on this request, with the addition of the enumeration of the built-in positions that will still accept primitives being included in the document. I'll have to get a review of that as well. I will do my best, but someone has to double-check that work.

RPR: Support from SYG. Are there any objections to going forward with this and adding the list of existing cases? No objections. You have consensus. MF, would you like to read out what you think are the key points that have been discussed here?

MF: Key points are that the committee agrees that strings in particular, and primitives in general, should not be treated as iterables in APIs and other language constructs going forward.

RPR: Thank you. Would you like to ask for a reviewer?

MF: Would somebody like to do a review of those positions? I'm pretty sure I can get them right; I'm pretty good at finding all occurrences of something in the spec at this point. But I would like somebody to do a review afterwards. So if we could appoint somebody, and after that person approves it, we merge it, that would be great.

SYG: I will volunteer since I requested it.

MF: Okay, thank you.
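For context, a few of the existing positions that already accept string primitives as iterables and that such an enumeration would cover; this is an illustrative sketch, not the list MF will produce:

```javascript
const s = "abc";

for (const ch of s) { /* iterates 'a', 'b', 'c' */ }
const spread = [...s];     // ['a', 'b', 'c']
const [first] = s;         // 'a' (iterable destructuring)
const set = new Set(s);    // Set(3) { 'a', 'b', 'c' }
const arr = Array.from(s); // ['a', 'b', 'c']
```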
RPR: Thank you SYG for volunteering. All right, I think that wraps us up, MF. Thank you. So the speed run continues. We are going to move on to the item that we brought forward from Ben, which is: Normative: Make DefaultNumberOption in ECMA-402 truncate options before validating range.

### Conclusion

- Consensus on the normative convention, but with all existing violations documented alongside.

## Normative: Make DefaultNumberOption in ECMA-402 truncate options before validating range

Presenter: Ben Allen (BAN)

- [proposal](https://github.com/tc39/ecma402/pull/908)
- [slides](https://notes.igalia.com/p/range-checking-inconsistencies#/)

BAN: There's a decision to be made between making a normative change so that 402 matches 262, making 262 match 402, or leaving them inconsistent. The discussion concerns things that date from well, well before I joined the committee, so my plan is largely to get out of the way of that discussion once it starts. While reviewing a PR in Temporal, RGN discovered that the behaviour Temporal inherits from 402, the way 402 handles range validation, differs from how 262 does it.

BAN: This is the corner of a corner case, because it involves people using fractional values where integers are expected. DefaultNumberOption in 402 takes the value, converts it to a Number, compares it to a pair of integer range bounds, and either returns the integer truncation or throws, depending on whether it is in bounds. This is used in 402 to process options like minimumIntegerDigits and minimumFractionDigits, that is, the number of digits to display in the formatted version of numbers. Let's see, I will flip back over to the non-formatted version for this, and I apologize for the indenting here. When DefaultNumberOption is checking whether the number of digits given is within these two integers, it converts the Number to a mathematical value, tests whether it's in range, whether it's between these two integers, and then returns the mathematical floor of that mathematical value.

BAN: But there are several methods in 262 that do a similar thing, checking whether the number is within, in this case, a hard-coded range. They behave as follows; again, I will flip over to the version that is hopefully more legible but might be otherwise zany in formatting. Take for example `Number.prototype.toFixed`, though all of the methods behave in the same way: it first makes a call to the ToIntegerOrInfinity AO, which converts the Number to an integer, and then it checks whether it's within range. So 402 validates whether it's in range and then converts, and 262 does it the other way around. So here are some usage examples.

BAN: Again, this is something that I would say no one ever does, but there are more things done in JavaScript than exist in one's philosophy. If someone converts a number to a fixed-point string and requests that negative 0.5 digits be displayed, I'm not quite sure what that means; it ends up treating the number of digits after the decimal point as zero, so, yeah, you get one digit. Likewise, if you're above the maximum, which is 100 digits, it first truncates to 100 and then displays it with 100 digits. 402 works the other way around: if someone specifies negative 0.5 fractional digits, it will call that out of range; the minimum is zero, negative 0.5 is out of range, we can't do that, so we throw. Likewise, if the requested number of digits is above the maximum by a fractional value, it will say that value is out of range.
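To make the difference concrete, here is a minimal sketch of the two behaviours, assuming an engine that matches the current spec text (this example is not from the presentation):

```javascript
// ECMA-262 (Number.prototype.toFixed and friends): truncate first, then
// range-check, so fractional inputs just past the bounds are accepted.
(1.005).toFixed(-0.5);   // "1"        (-0.5 truncates to 0, which is in range)
(1.005).toFixed(100.5);  // 100 digits (100.5 truncates to 100)

// ECMA-402 (DefaultNumberOption): range-check first, then truncate,
// so the same kinds of inputs throw.
new Intl.NumberFormat('en', { minimumFractionDigits: -0.5 });  // RangeError
new Intl.NumberFormat('en', { maximumFractionDigits: 100.5 }); // RangeError
```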
BAN: Here are the relevant discussions; again, this was discovered by RGN from a PR in Temporal. So we have options. The PR that I have up changes 402's behaviour to match 262's: it first truncates to the integer, and then checks whether the integer is in range. The other option is to make 262's behaviour more strict, that is, have 262 behave like 402 currently does: if someone gives a value that is the minimum or maximum minus or plus a fractional value, so negative 0.5 or positive 100.5, 262 would reject that as being out of range; 100.5 is more than 100, that's out of range, and we throw a RangeError. The other option is to leave it inconsistent and have 402 and 262 behave differently in this very odd edge case. From our discussions in TG2, we have a mild preference for making ECMA-262's behaviour more strict, so that when the value is out of range by a fraction we say that's out of range and throw the RangeError. But, as I understand it from looking back over the notes, it's a relatively mild preference. So actually our preference is to reject this PR and instead make a PR against 262. That said, this is a matter for discussion, so I am going to hand it over to people to discuss. Like I said, I'm not necessarily the best person to answer questions about this; I'm going to throw it over to the group. Again: 402 currently validates and then truncates, and 262 does it the other way around, truncates and then validates. I would like to open it up for discussion now.

MF: Okay, so I see at least two more options here. The first one being that 402 can just reject non-integral inputs in the first place, and the second option would be that 402 rejects non-integral inputs and we also try to get the 262 APIs to reject non-integral inputs. I think SYG will touch on this later, but I think option 2 and that latter one that I said probably aren't worth doing. I personally prefer the former alternative that I just recommended: 402 rejecting non-integral inputs.

BAN: So just to make clear, that option is that 402 rejects non-integral inputs, and not necessarily 262.

MF: Yes, just because I think it's going to be harder for 262.

WH: I originally wrote the `toFixed`, `toPrecision`, and `toExponential` spec text in 262. Back then the committee consensus was that operations were supposed to be as permissive as possible, which is why these accept fractions and truncate them. Nowadays I think we should just reject non-integral inputs, but trying to retrofit `toFixed`, `toPrecision`, and `toExponential` is not worth the effort.

SYG: I agree with both MF and WH here. Given that we already have consensus for future APIs to be less permissive, I think the right path forward is obviously to – maybe not obvious to everyone – but I think the right path forward is to reject non-integral inputs in ranges that are supposed to be integral. toFixed and toPrecision are very old. So I feel stronger than what I said on TCQ: it seems like it would be a lot of effort to figure out if we can change them. I'm not excited to see if we can change their behaviour.
I don’t know who is signing up for that on 402, if 402 wants to change their behaviour of this. But I feel like the most fruitful path forward is for new APIs including temporal perhaps that we reject nonintegral and leave the existing things as is. To be clear, that’s a question to the Intl experts in the room, do you feel there is enough momentum to see if it’s possible to do this change? + +BAN: Yes, I believe so. I’ll defer to others in TG2 if people disagree. Currently, this PR doesn’t reject nonintegral values and instead tests the range after coercing. I don’t want to speak for other members of TG2, but I would be perfect fine with investigating whether we can just reject nonintegral inputs. + +SYG: Specifically it’s an ask to the browsers, if they’re willing to check this. + +WH: If we do accept fractional inputs and truncate, then I think the behavior should be as it is for `toFixed`, `toPrecision`, and `toExponential` — do the coercion first (the coercion in this case being truncation) and then do the range check. But, notwithstanding that, my position is also that whenever possible we should not do coercions or truncations. + +RGN: I agree with WH and thank him for the feedback. If implementers are willing, then I think there’s at least broad consensus that the best outcome is to be strict and not truncate at all. But even if we can’t get that, I would love to see alignment on truncation rather than this gratuitous discrepancy. + +PCO: So SYG, you mentioned new APIs including temporal. Generally we haven’t gone back and applied new design conventions to existing proposals, but I guess if we want to go back and apply the convention to temporal, we should have consensus there ought to be a normative change for that. + +SYG: First to respond to the previous point I think was it RG I don’t think the browsers – I can only speak for V8 obviously but I’m not going to sign up for the work of figuring out if to fixed and to precision can change. So the inconsistency will be here to stay for that part. For PCO’s question about temporal, I mean if we all agree that this is the right design and nobody has shipped temporal or fully implemented it yet at this point, this is the best time to do it, right? So why not? + +MF: I just want to give a counterexample to that. I know for at least RegExp.escape, we were paying attention to new normative conventions and we removed a coercion that was happening in the earlier form of the proposal. I don’t recall us ever discussing explicitly whether we would go through proposals and apply the conventions. I think we just address them as we see appropriate. So I would be in favour of applying normative conventions to Temporal as much as implementers are willing to. + +KM: I guess on the similar comment, I don’t think WebKit or Safari has any automated way to see if these things would be compatible. I think it would be pretty hard. It seems like the work would be basically just trying to ship it and then most likely it would fail. It would be a lot of effort for something that probably doesn’t have a high success chance any way. + +PCO: So I might be misremembering, but I think in the – at least one of the stop coercing thing topics from previous meetings, we explicitly said it was a nongoal to go apply this to all of the in flight proposals. I could be remembering that wrong, but that’s what stuck in my Memory. + +RGN: KM said that it would be a lot of effort for likely failure. 
I was wanting clarification if that were regarding the ECMA-262 operations or the ECMA-402 ones. + +KM: I was talking about the 262 version, yeah. + +RGN: Okay. Do you share that opinion for the ECMA-402 operations? + +KM: That I just don’t have enough knowledge of the usage of to know. So I guess my answer is just, yeah, I plead the fifth or whatever. + +RGN: All right. So, an expectation that it won’t work for ECMA-262 but unknown for ECMA-402. + +NRO: I would say I hear the position to find and opposition of point 1 here. I think it’s – if we agree that being more strict is good, the valueOf being more strict and trying to catch more user errors is much higher than being consistent which is also the reason we’re going with the normative conventions and trying to design better. So I think here consistency is like not good enough point to remove some. + +RPR: Has a plus 1 comment NRO. And then down to RG. + +RGN: Just to get this in the record, one problem with current behaviour is that low-level operations tend to be used by new functionality, and that makes it a risk for spec authors to accidentally inherit behaviour that we really don’t approve of as a committee any longer. If a normative change is not possible, then we should—and I personally will—still pursue an editorial change to clearly mark the misbehaving operations as legacy. We have done that in a couple other cases already in ECMA-402. And it’s important to keep that in mind as we do look forward that even if we can’t fix the old contents, we can put changes in place to discourage new things from using them. + +SYG: I think you answered my question. + +WH: The previous comment was in regards to the 262 operations or the 402 operations? + +RGN: I would advocate for both. But I have more direct control over ECMA-402 and so that’s the one that I would influence personally. + +WH: I think that trying to put comments in `toFixed`, `toPrecision`, and `toExponential` would open up a Pandora’s box. There are many things in 262 which coerce and I would not want to single out `toFixed`, `toPrecision`, and `toExponential` as being the bad guys. + +RGN: Right. For clarity, the pattern that we follow in ECMA-402 is to rename the operation rather than adding a bunch of comments to arbitrary operations. So if you think about things in ECMA-262 like ToIntegerOrInfinity, there are similarly low operations in ECMA-402 like GetOption and what we do with those having behavior we no longer advocate is renaming them to something like “Legacy{VerbPhrase}” and then we reference that legacy operation from all of the legacy APIs. + +WH: What you’re suggesting is, hypothetically, renaming *ToInteger* to *ToLegacyInteger* or to *ToIntegerLegacy* or something like that? + +RGN: I don’t know off the top of my head if that’s warranted for ToInteger, but if so, then yes. ToInteger would be renamed LegacyToInteger and we’d introduce a new ToInteger with the good behaviour. + +WH: Yeah, this is veering off into a completely different subject area and I think that any proposal like that would need to have some significant committee discussion. + +RGN: Sure. I can refer you to parts of the ECMA-402 spec if you want to see what it looks like in practice. + +NRO: I like this approach if we decide that we don’t want the behaviour anymore, with the guide proposal out to make sure that everybody is following the behaviour. 
SYG: To clarify, I agree with WH that we're certainly not going to be adding comments to existing built-ins. The editorial thing, I think, is basically all in 402, because 402 already has the organization in the spec where basically everything calls out to the same options-bag-handling AOs, and that doesn't really exist in 262. It may in the future for options bag handling, and we should get that right the first time, right meaning reject non-integral input for integral ranges. So I think the editorial thing doesn't really apply to 262 today, in that the choice between coercion and no coercion is just whether you do the coercion, and that's not something we can fix with AOs, whereas in 402 things do bottom out in some option-handling AO, so it's easier to do there by calling one of them legacy.

KG: This is in response to something that PCO said earlier about whether it was a goal to go back and update existing proposals when we agreed to not do coercion going forward. It's correct: when I presented, I asked the committee whether we wanted to go update existing proposals, and no one spoke in favour of it. So I said, great, we'll leave things as they are. That said, we didn't have an extensive discussion about it; it was sort of a brief item at the end of my presentation. So if someone wants to push for that, go ahead and push for it. But the place we left it last time is that things that were already at Stage 3 we were going to leave alone. `RegExp.escape` was not at Stage 3; for what isn't at Stage 3, it makes sense to adopt these. Stage 3 proposals we don't normally make as many changes to.

RPR: Thank you Kevin. So we've got about four minutes left on the timebox, Ben.

BAN: Let me see if I can summarize. Currently there's no support for making 402 behave in a less strict way than 262 does. Am I correct in there being a sense that TG2 should investigate whether it's possible to reject non-integers altogether?

MF: WH and I had the preference that, if we cannot reject non-integers, if that way does not work for some reason, then it would be preferable to take option 1 that you have here over option 2. That would be making 402 less strict, doing the coercion first.

BAN: Okay.

RPR: I think WH is agreeing. WH, are you agreeing?

WH: Yes. Also I think calling one of them stricter than the other is somewhat misleading here. It's just a matter of whether you do the coercion or the validation first.

BAN: All right. Okay. So since that seems to be the sense of the committee, I suppose I will ask for consensus on this PR to make 402, like 262, coerce and then validate.

SYG: Wait, sorry. Whether TG2 is interested in that change is a separate question from whether we can do it. KM spoke to this: he doesn't have an intuition on how widespread use of the 402 built-ins you're trying to change is, and I also have no intuition at all. I don't think we have use counters in Chrome for most built-ins. So I'm not comfortable giving consensus that we will do the work. I don't know where that leaves you.

BAN: Okay. Then I suppose the consensus is that TG2 should investigate.

BAN: Recap of key points: the behaviour in 262 likely cannot change to match 402, and as I understand it, for new proposals going forward we should consider the more strict behaviour of validating and then coercing. Let me know if anyone wants to add to that list of key points.

WH: This is backwards from what we just discussed.
BAN: Backwards in the sense that there will be no changes to 262, and that 402 should consider the validate-then-coerce behaviour?

WH: No. The coerce-then-validate behaviour.

BAN: Okay. Yeah, of course.

WH: And for future things, we should not coerce.

BAN: All right.

WH: For future things, we should just reject fractions altogether if they're not meaningful.

BAN: So: 262 does not change; 402 investigates whether we should change to match 262; and future proposals actually reject non-integers.

WH: Yes.

BAN: Okay, fantastic.

RPR: BAN, could you go to the notes now to update that, and then perhaps WH, you could be a reviewer on that. It will only take like five minutes.

NRO: I just had a clarifying question. Is it correct that 402 currently validates and then coerces, that we're changing it to the other order, but that for the future we're going to do something different again?

BAN: Right now 402 validates and then coerces. And for the future, we simply reject all non-integers.

NRO: Validate first and then reject?

BAN: Reject meaning: if someone gives a non-integer value, whether it is in range or out of range, it is rejected. Validate-then-coerce accepts fractional values that are within the range, so currently 402 rejects fractional values just past the edges of the range and accepts the others.

### Speaker's Summary of Key Points

- Not worth the effort to change this behaviour in 262
- Value of checking for more user errors going forward is greater than value of consistency
- Future proposals should adopt new normative conventions
- Not necessary to change in-flight proposals (see Stop Coercing Things presentation from an earlier meeting)

### Conclusion

- ECMA-262 to remain unchanged
- TG2 to investigate whether to keep the validate-then-coerce behaviour or adopt 262's coerce-then-validate behaviour
- TG2 to investigate whether the relevant AOs can reject non-integral values
- Proposals going forward should reject non-integral values

## RegExp.escape for Stage 3

Presenter: Jordan Harband (JHD)

- [proposal](https://github.com/tc39/proposal-regex-escaping/issues/58)
- no slides

JHD: You can scroll down a little bit. The current status of this proposal: the issues we discussed in a previous meeting have been incorporated, and all the escaping that is expected is being done. The spec has sufficient sign-off, Test262 tests exist and will be merged as soon as the proposal reaches Stage 3, and I would like to ask for Stage 3.

RPR: All right. Are there any questions for JHD on this? MM supports with a plus one.

RPR: CDA has support, no need to speak. DLM has support and no need to speak.

JHD: Awesome.

### Conclusion

RegExp.escape promoted with consensus to Stage 3

## Drop assert from import attributes

Presenter: Nicolo Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-import-attributes)
- [slides](https://github.com/tc39/proposal-import-attributes/pull/161)

NRO: As you might remember, a year ago, I think, we changed the keyword used by the proposal from `assert` to `with`, because we had to change the semantics. We were not sure whether it was web compatible to remove `assert` or not, so it was kept as deprecated syntax. I'm happy to announce, and thanks a lot to the teams for working with me on this, that both Node.js and Chrome have successfully unshipped the keyword, and we can remove it from the proposal. I will ask for consensus to remove the `assert` keyword.
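For reference, a rough sketch of the syntax change being discussed, using a hypothetical `./config.json` JSON module:

```javascript
// Deprecated form, now unshipped by Node.js and Chrome:
// import config from "./config.json" assert { type: "json" };

// Current form from the import attributes proposal:
import config from "./config.json" with { type: "json" };

// Dynamic import uses an options bag with a `with` key:
const mod = await import("./config.json", { with: { type: "json" } });
```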
It’s this orange piece of spec text on screen. Then there’s also an equivalent piece of spec text in the other part — if you can go to 13.3.10. This is the other case, which checks if the import options have the `assert` property. So do we have consensus for removing this normative option from the proposal?

RPR: On the queue, you have Mark Miller with a plus one.

RPR: And DLM says thanks to everyone involved. Support. Anything more to say, DLM?

DLM: No, that’s fine. Just appreciate the effort. It’s nice to see this. I appreciate the effort everyone made to make that possible.

RPR: WH with another +1. All right. This is sounding positive. Let’s just do one last call: are there any objections to dropping assert from import attributes? No objections. Congratulations. You have consensus.

NRO: Thanks everybody. Thanks again to Shu for working with me on removing it from Chrome. Just for planning, I plan to bring the proposal to Stage 4 at the next plenary, and it has shipped in Safari and in Chrome with the new spec text. Thanks everybody.

RPR: Wow, thank you. Amazing. Really good collaboration and working our way through the journey. So we have six minutes remaining. Do we have any other tiny things to bring in?

### Speaker's Summary of Key Points

When we changed the keyword from `assert` to `with`, we didn't know if it was web compatible to remove `assert` because it was shipped in Node.js and Chrome, so we left it as a "deprecated syntax". Both Chrome and Node.js successfully unshipped `assert` now, so we can remove it from the proposal.

The proposal will be presented for Stage 4 at the next plenary.

### Conclusion

- The committee has consensus for removing `assert` from the proposal

## Atomics.pause for Stage 3

Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-atomics-microwait)
- no slides

CDA: Time to start after the lunch break. Two more people have joined. Let’s repeat the call for notetakers.

USA: Let’s start with the rest of the agenda. First up, we have Atomics.pause for Stage 3. Are you ready?

SYG: Yes. Let me share the thing. One second. So as I said before the lunch break, there are no normative changes to the proposal since last time. As a quick recap, it is a new method on the Atomics object that has no observable behaviour, but gives the implementation a chance to emit a pause instruction that is used in spin loops, such as inside mutex implementations. Currently Stage 2.7. Test262 tests have landed. With that, let's go to the queue, where before the lunch break WH had an item that said bad spec text; I'd like to hear more about that.

WH: At the last meeting, when advancing to Stage 2.7, we agreed to a couple changes. One of them was ±0, which was fixed. The other one is that the spec text presented today has the reverse of the meaning of what we had agreed to. The relationship between the iteration number and how long the thing waits was supposed to be linear. Instead we got Note 3, which makes no sense and contradicts the rest of the spec.

SYG: What is supposed to be linear?

WH: The relationship between iteration number and how long the thing waits.

SYG: I don’t recall there being consensus that the time Atomics.pause would wait for an iteration number was linear. It’s probably clearer to show an example. I don’t have an example written. I thought I did. Apologies. The idea is that you wait for some bounded number of spins. You pause for some bounded number of times.
The iteration number is just a simple for loop that bounds the number of times you would pause. So the longest amount of time you would wait is for iteration zero, and then subsequent iterations would wait a shorter and shorter time. Usually this would be exponential back off, but you could have other kinds of back off.

WH: I don’t understand what you’re saying. This is the same problem that we ran into at the previous meeting, in that we were talking past each other. Please be very specific.

SYG: Very specifically, when you call Atomics.pause with iteration number equals zero, suppose that waits some amount of time. If you then call Atomics.pause(1), that would wait a shorter or equal amount of time than pause(0), and so on and so forth.

WH: What? Atomics.pause(1000) waits less than Atomics.pause(5)?

SYG: What was the first thing you said?

WH: Atomics.pause(1000) waits much less than Atomics.pause(5)?

SYG: That’s correct. That’s how it has always been in the proposal. The iteration number is not a hint for how long Atomics.pause waits. It is: what iteration of a spin loop are you in, how many spins have you done? So there’s usually exponential back off. As you spin longer, you should wait shorter and shorter. At first, you wait for the most amount of time. Because if you look at the example – I will switch tabs for a second. Can you see this? I don’t need to switch. Can you see this example?

WH: Yes. We can see it.

SYG: This is a pseudocode version of how the fast path of mutexes is usually written with a spin lock. They spin for a bounded amount of time. Each time they go through this outer do-while loop, they spin a little bit before going back to try to acquire the lock again. The best practice, as far as I can tell, is that the initial iteration spins for the longest amount of time and then there’s back off for how short you wait, until you exhaust the spin count and then you go to the slow path and put the thread to sleep. And this API is designed so you pass in this counter of spins directly to Atomics.pause. It is not a hint for how long to wait. But it is the iteration number of your spin loop. That’s why it’s called iteration number.

WH: So that is entirely unclear from the way it’s written now.

SYG: Is it? Like, it says that the number of times – Note 3 basically says that pause(n) waits at most as long as pause(n+1), and the n+1 wait is equal or shorter. Why is this unclear?

KM: I have a question that might help this misunderstanding. I guess it’s not super obvious why you would want to wait the longest on the first iteration. Sorry to interrupt. I’m not sure if that’s okay.

SYG: That’s fine. I think it’s because that you – I’m not really sure. I had figured it was due to empirical evidence. Like, every spin lock I had gotten my hands on to read, including Safari’s parking lot, including the spin lock in the allocator inside Chrome, has this exponential back off: when the lock is contended and you want to spin for a little bit to try to acquire the lock, you try to wait the longest in the beginning and then you just wait progressively shorter. I had figured this was – I don’t know, people knowing how architectures work had divined this somehow and then figured it out with empirical evidence, and I was copying that as best practice.

WH: For spin locks, you pause for the shortest amount of time before checking again and then you progressively do longer and longer pauses.

SYG: That is the opposite of every implementation of the fast path of a mutex that I have read.
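A minimal sketch of the spin-loop pattern SYG describes — the lock-word layout, helper name, and spin count are illustrative, `Atomics.pause` is the proposed method under discussion, and `Atomics.wait` only works off the main thread:

```js
const SPIN_COUNT = 16; // illustrative bound on the fast path

// Sketch only: `i32` is an Int32Array over a SharedArrayBuffer, with 0 = unlocked, 1 = locked.
function acquireLock(i32, index) {
  for (;;) {
    // Fast path: spin a bounded number of times, passing the loop counter to pause.
    for (let i = 0; i < SPIN_COUNT; i++) {
      if (Atomics.compareExchange(i32, index, 0, 1) === 0) return; // acquired
      Atomics.pause(i); // an iteration number, not a duration; later iterations may pause less
    }
    // Slow path: put the thread to sleep while the lock word is still held, then retry.
    Atomics.wait(i32, index, 1);
  }
}
```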
+
WH: That’s how exponential backoff works. I think you’re looking at this as spinning and I’m looking at this as pausing.

SYG: Okay. Can this be resolved by looking at an actual implementation to see what is actually done?

WH: Okay. The issue right now is the spec is not intelligible. We’re both looking at it and seeing different meanings.

SYG: It seems like we disagree on whether the first – whether n equals zero should wait the longest or the shortest. My intention was that it waits the longest. But your contention is it should wait the shortest? That is a different issue than the spec being unintelligible: I think this note says that zero is the longest, as does step 2 here, which says that zero is the longest. We may disagree on what it should do, but I’m not seeing why this is confusing editorially.

JRL: This is the same topic. I don’t understand what “signal” means, but Note 3 is incorrect. It is the opposite of what you intend?

SYG: How is it the opposite of what I intend?

JRL: “Atomics.pause(n) should wait at most as long as pause(n+1)” — that means n is less than n+1.

KM: I think that’s what he intends. But that is unintuitive. May not be what you expect.

JRL: He was saying the semantics he wants is that n waits longer than n+1. The larger the iteration, the shorter it waits. That note is incorrect given that semantic assumption.

SYG: Can you say why? “Wait at most as long as n+1.”

JRL: That means that n+1 is larger, because n can be smaller. “At most as long” means n can be smaller than n+1, which is the opposite of the semantics that you just told us.

SYG: That is not my reading. But I can see this sentence is confusing. I would reword it and say pause(n) should wait for a longer –

JRL: More than n+1.

SYG: Wait longer than, yeah.

JRL: I still don’t like these semantics. I agree with WH that the semantics are incorrect. The note is confusing to me. I have no idea what “signal sent” means in the spec text.

SYG: Okay. This sentence has no teeth. There is no observable behaviour here. So the “signal sent” — basically, (?) and I wordsmithed this a bit in the editor calls — it basically says if your CPU has a pause instruction, you can execute the pause instruction. And after wordsmithing it, we decided on “support a signal” to be, I guess, more generic than saying something directly like “your CPU has a pause instruction”. The signal here means: execute the pause instruction. You may want to execute the pause instruction more than once, which again is unobservable because it’s just timing. That’s what “signal sent” means. Are you saying you would prefer more direct wording like “execute a pause instruction” or something similar, rather than something abstract in case some underlying architecture has no such instruction but some other way to signal that the code is in a spin-wait loop?

JRL: I don’t understand what “signal is sent” means unless I know the implementation that it’s being sent to. I would like it to be worded in a way where the exact behaviour that you want is reflected here. And that could be by taking Note 3 and rewording it, honestly, but then having Note 3 be spec text, saying, implementation-defined, n waits less time than n+1.

SYG: Very well. I see that editorially this is less confusing.

USA: In the queue we have KKL.

KKL: I feel like I might be able to help us get speaking on the same page. I don’t have an iron in this particular fire, so I’m just going to attempt to help connect some dots.
I think that perhaps the point of confusion is that the word “backoff” leads to an opposite conclusion if you’re coming from an understanding of exponential backoff and retry loops in a fault-tolerant system. In exponential backoff loops, each subsequent iteration backs off for a longer period in general, whereas this seems to be the opposite, where you’re making the assumption that if you’ve spun a long time, maybe you spin a shorter amount of time, so you’re converging on spinning the least amount of time while waiting to acquire the contended lock. And the theory of this is that you should spend less time the closer you get to that – to the point where you might win the lock. Does that sound correct to you, SYG?

SYG: That sounds reasonable. Again, I don’t really know why every lock I have read works this way. But I can imagine that explanation. If I can try to reword it, it’s basically: we know this is contended. We want to try for a bounded amount of time without putting the thread to sleep, because that would result in higher throughput when trying to reacquire the lock. So we spin the CPU, and we know it’s contended at the very beginning, so we wait for like some period of time. But if it remains contended — I think the assumption is that the longer it remains contended, the higher the likelihood it will remain contended to the point where the thread should go to sleep. So because of that assumption, you spin for fewer and fewer cycles the more times you go through this retry loop, until you spin for so little and you still can’t acquire the lock, and then you go to sleep because that’s the slow path. So in that kind of interpretation, yes, I think your explanation makes sense.

KKL: So I propose that something that would help editorially is not using the word “back off”, and maybe there’s another term in the literature that’s closer to the meaning.

SYG: My thinking right now is I wouldn’t talk about implementation strategies at all. I would basically only talk about the expected use, which is that you should spin for a bounded amount of time and pass the loop counter to this argument. And then the amount of time paused for smaller loop counters is longer than for larger loop counters. Does that seem clear?

USA: We are at time for this. Shu, do you think there’s a way out?

SYG: I would like to request — let’s say, I will be conservative — a ten-minute extension to talk through the editorial change that I just proposed with WH, to see if that’s clear.

USA: We have five minutes overflow. Can you do with five?

SYG: Sure, I can try.

SYG: Okay. Thank you, KKL. WH, did what I just said sound reasonable?

WH: Offhand, no. I think there is a significant issue that we need to discuss, and it’s clear to me that we did not achieve consensus. We thought that we achieved consensus at the last meeting, but we were unknowingly talking past each other.

SYG: Okay. So I’m not sure what the disagreement is. The disagreement is that you think it’s incorrect design that a smaller iteration number pauses for a longer amount of time than a larger iteration number?

WH: I was under the impression, from the way it was written, that the iteration number determines how long the thing pauses.

SYG: Right. It is the opposite. It does determine how long it pauses, but in the opposite way than you expected.

WH: Yes.
So if that’s the behavior you want, then what you should do is: if you want to pause less and less, you should start with a higher Atomics.pause argument and decrease it over time.

SYG: I feel like that is less ergonomic from the implementation point of view, where you have a simple for loop with a number of spins — `i <= spinCount`, `i++` — and you pass `i`. In your preference, you would have to do manual subtraction to pass the iteration number.

WH: The issue is that there are different kinds of backoffs you can do. You can do exponential backoffs and quadratic backoffs and linear backoffs. As of now, there’s no way to express —

SYG: That is by design.

WH: If the argument to Atomics.pause directly controlled how long the thing waits, you could do any of those algorithms however you like.

SYG: It’s a hint, first of all, it is not a direct control. The intention was for the implementation to choose the best back off strategy given the execution tier that the JavaScript code is currently running in. If this call were to be inlined in the highest tier, then the number of pauses would be different than if you were running unoptimized JavaScript code, because, for example, the JS call overhead is so high you don’t want to execute multiple pause instructions. Pure implementation hint.

WH: Yeah, the linear factors, but I’m not ok with —

USA: Can we continue with the queue?

USA: KM is next.

KM: I guess I will make a comment on that point before I go to my comment. You could just run the loop backwards: set i equal to the upper limit and subtract one on each iteration of the loop, if you wanted to have Waldemar’s suggestion, though it is less ergonomic. And as far as I can tell reading our code base, it is the same amount — we do the same pause on every iteration.

SYG: Interesting, okay.

GCL: I just want to say there are a lot of different implementations of spin locks, and I know of several, including the Linux kernel, where the delay is shorter or longer based on different signals from the lock as it’s attempting to be acquired. I think it would be best if we just don’t specify how the delay relates to the iteration count.

SYG: I mean, I will respond to that real quickly. Recall that this note basically has no normative force, because it can’t be observed, and it’s kind of a compromise because people last time wanted a step to say something, because originally step 2 didn’t exist. But okay, let’s try to drain the queue as quickly as possible.

LCA: I also don’t want to add more fuel to the fire. But I just looked through two different repositories that both implement spin locks, and both of them do the opposite behaviour, where the wait time increases. I agree with you that we should probably not specify any specific behaviour here and let the implementation decide.

RBN: So just to clarify a couple of things. This is very close to, or in agreement with, what KKL was saying as well. Most implementations of spin locks and user-accessible spin-waiting APIs are high-level like this: they are designed to avoid the CPU spinning on a tight loop on a single thread, and to avoid contention, because you have multiple threads trying to compete for a resource. So the counter specified or passed to pause is not a back off. It is to avoid contention. And how that generally works is that it’s used as a hint to make certain decisions. Sometimes it will not pause. And this all purely depends on implementations.
Sometimes it won’t do anything and will be a no-op, and sometimes it will pause, and oftentimes it’s based on how many times the iteration has been seen and how close you are to the context switch or kernel transition, which is expensive. As a result, early on — like if you’ve already done some expensive work and just entered into the context — you might see shorter or longer wait times. It’s all CPU-dependent: what is most optimal for the CPU architecture determines whether the pause is efficient. As you kind of approach a context switch, you want to check more frequently and avoid the context switch, because it is expensive. Which is kind of why the spec text is written as it is right now: the higher the iteration number is, it doesn’t mean you want to wait a certain amount of time, it means that you are attempting to pause more frequently. So the higher the number gets, the more frequently you are trying to do this, which means you want to have shorter times; this is trying to check for contention. It’s very heavily about trying to check for contention: we are getting closer to and approaching a context switch, and we want to return faster and faster to see if we can get that lock. Once you get to the context switch, then all bets are off. You can reset everything you’re concerned about with the count to have slower back off, because now it doesn’t really matter until you start approaching the context switch or kernel transition with priority on the thread again. So “back off” is a bad term and we should probably just strike that from the spec. And in a way, we probably should replace the entire spec text of step 2 — it’s just a runtime-specific optimization purely based on CPU architecture — and let everybody fill it in, because it will always be dependent on CPU architecture, the specific implementation, and the specific tuning concerns of the person writing the code at the very core level within the implementation, and what the spec text says isn’t going to matter because of that. All that matters is that the iteration number may not have much to do with how much time you’re waiting, and there is no guarantee it will.

USA: So we’re at time. SYG, do you think that there’s a way to end this?

SYG: I don’t know how to make progress from here, because we’re debating something that has no observable behaviour. My current thinking is that I will come back at some point — like, at the next meeting — and do exactly what RBN said: remove Note 3 and replace basically these last two sentences with something that says the iteration number may be used by the ECMAScript implementation to determine the amount of time that is paused. But the paused time here is unobservable, and we’re talking about on the order of, at most, hundreds of CPU cycles, which is a very, very short amount of wall-clock time. I see Waldemar has something on the queue about divergent behaviours. I don’t think you can check that level of divergence given how coarse actual timers are. Every call to Atomics.pause, with any argument, will look like an immediate return. So I will come back next meeting with the editorial change. But I want to be clear that there’s no actual normative behaviour that we’re discussing. Is there any disagreement on that? If we have disagreement on that, there’s a deeper issue. Okay, WH disagrees. Can you please engage on the repo by filing an issue. I would like to get ahead of this before the next meeting.
+
USA: Thank you, Shu. So I think we could capture the queue from today and come back to this.

DE: Can we agree, as part of the conclusion, that people who disagree with calling the behavior "implementation-defined" will engage with SYG offline before the next meeting? e.g., WH, MLS.

USA: Shu, did you agree with the conclusion?

SYG: Yes, I agree with the conclusion.

SYG: More a question to address to Michael and WH?

USA: Would you like to speak to that?

WH: Also I think it’s clear that we were talking past each other at the last meeting and did not achieve consensus at the last meeting.

DE: Do you agree to engage between this meeting and next meeting offline with SYG?

WH: Yes. I also wanted it recorded in the notes that the consensus we achieved at the last meeting was an illusion.

DE: All right. MLS, you have a comment in the queue also. Are you able to commit to engaging between the meetings?

MLS: Well, I agree with WH. Yeah, our team can engage. If there’s no normative text for the argument, we’ll probably ignore it. We don’t know what to do with it.

USA: All right. Thanks, everyone. Let’s move on to the next item for today, which is Concurrency Control. MF, are you ready?

MF: Yes, I can do that.

USA: Just before we move on, Shu, would you like to state some of the key points of the discussion to wrap up the conclusion?

SYG: Key points are that there is confusion about the iteration number argument to the Atomics.pause method, and that this needs to be resolved before the next meeting, before stage advancement.

USA: Thank you, Shu.

WH: I think it’s more than that. I don’t think the stage advancement at the last meeting was valid, since we didn’t agree on what the behavior actually is.

USA: All right. I believe that has been recorded. MF, you may start.

### Speaker's Summary of Key Points

There is confusion about the iteration number argument to the `Atomics.pause` method. This needs to be resolved before the next meeting, before stage advancement.

## Concurrency Control

Presenter: Michael Ficarra (MF) and Luca Casonato (LCA)

- [proposal](https://github.com/michaelficarra/proposal-concurrency-control)
- [slides](https://docs.google.com/presentation/d/1rLIzouj1zTr4KdjNrYMZt-FbvEGMPmVeJ8HjOtB6wOU)

MF: Can I get confirmation that LCA is here?

LCA: I’m here.

MF: I wanted to make sure you’re around in case you wanted to add anything. This is a new proposal, and I am looking for Stage 1. Concurrency Control, championed by both LCA and myself. So there’s a lot of background to this proposal. A long time ago, we had a very large iterator helpers proposal. We were able to make progress on that by cutting it down to what we call the MVP, that is, the essential methods mostly mirroring the methods that were available on Array.prototype. It went through Stage 3, including async variants of those MVP methods. At the last minute, so I think the meeting following Stage 3 advancement, we realized that our strategy for async iterator helpers was not ideal. We wanted to revise it. At the time, we wanted the understanding of async iterator helpers to be how you would naively write each helper using an async generator. That has some benefits. You know, that is why we had that strategy at the time. But it also has a drawback of limiting the concurrency of the underlying iterators that you’re applying transforms to, because of the queuing behaviour that is present in async generators.
So we pulled async iterator helpers out of iterator helpers to resolve that and allow transforms to preserve the support for concurrency of iterators that you’re applying those transforms to. So we’ve accomplished that with the async iterator helpers, and any of those transform methods do allow it — so if you have two outstanding nexts, whose returned promises have not resolved, you will have two outstanding nexts on the underlying iterator. When you map with concurrency, you retain concurrency. But there’s no way at the moment, as part of that proposal, to concurrently drive async iterators. That’s where this proposal comes in. So while async iterator helpers was split out to add support for preserving already-supported concurrency, this proposal is split out from async iterator helpers in order to support actually driving iterators concurrently. So that’s where we are.

MF: So there are three components of this proposal. The first is this governor protocol, which we’ll get into in a moment. The next is what we’re calling Semaphores, which are just a counter that implements the governor protocol. And then the third is integration back into the async iterator prototype methods to accomplish this goal we have of driving async iterators concurrently.

MF: So on each of these slides, I will have a little section that marks off what is nonessential. This proposal, I acknowledge, going for Stage 1 is slightly overworked, but I feel like it was necessary to show the full vision of the proposal. So, you know, a lot of detail exists here and also nonessential components exist here, but know that that is all still open to change. The parts that are essential are not highlighted.

MF: So the governor protocol is very simple. It just gives you an acquire method that returns a promise of these things that we’re calling governor tokens, and the governor tokens have a method called release and are compatible with the explicit resource management proposal, being disposable. The way that you interact with the governor is you ask to acquire a token. When you eventually do, you later release it. That’s how you manage a limited resource.

MF: So semaphores, as I said before, implement the governor interface and just act as simple counters. So you initialize them with a capacity and then they will hand out tokens — only that number of tokens may be valid at the same time. So if you initialize with a capacity of five and you acquire five different tokens, and you attempt to acquire a sixth, that promise will not resolve until one of those first five has been released or disposed with the explicit resource management proposal.

MF: I didn’t even go over the nonessential parts. The first component, let me talk about them just real quick. Yes, we have this protocol which hands out these tokens. That is necessary. But we could also have a class called Governor that is abstract, because it throws when it’s constructed directly, but things can extend it — Semaphore might be a thing that extends it, and other complex governors could extend it — and with that you can get the helper methods. The helper methods, I will go over them briefly. `with` is a way to execute a function only if there is capacity in the governor at the moment. `wrap` is a way to, you know, do that repeatedly. So you get back a function that does that. And then `wrapIterator` is a way to do that but for iterating instead of for function calling.

LCA: Just want to clarify that it’s not if – like, can you go back to the previous slide.
`with` will not – it’s not conditionally calling the function based on whether you currently have capacity. It will wait until you have capacity to call the function.

MF: Sorry, yes. That’s correct. So that brings me back to here. Semaphore would extend Governor if there is such a class. It would also be nice for it to be shareable across threads, to pass it to workers and share a concurrent resource. And then finally, part 3 is the integration back into the async iterator prototype. This has a couple of parts to it. If you recall, in async iterator helpers there is a buffered helper that is added as part of that proposal. Remember, that’s a Stage 2 proposal. And it takes a limit, which is an integer, and possibly some way to tell it to pre-populate. We haven’t finalized that. We would extend the first parameter to accept an integer or something that supports the governor interface. Optionally we add a new helper called limit. Now that we have this governor capability, it would be useful to be able to limit concurrency. That means if you pass a limit of three and then try to next the resulting iterator five times concurrently, there will be only three outstanding promises for the underlying iterator. It will not further try to next the underlying iterator until one of those has resolved. But that is not essential. That can be done as a follow-up. But it’s pretty obvious given this new capability. And then finally, for any of the consuming methods on the async iterator prototype, we will add a parameter which is a governor or integer, so it matches the first parameter of buffered and controls how concurrently they drive their underlying iterator. So the toArray, forEach, some, every, find, and reduce. Those are the consuming methods that are in async iterator helpers, on AsyncIterator.prototype.

MF: So this part deserves some explanation. Why do we have this really general concept of a governor? Why is a protocol necessary here, why not just integers? Why not always pass three? Well, you can. In any position that we are accepting governors, we are allowing an integer to be shorthand for just that amount of maximum concurrency. Very similar to a semaphore used in that position. Sometimes you have a resource that you want to share among non-coordinating consumers. Let’s say you want to make fetches, you have two different things that will be independently making their fetches and you want to have no more than like five fetches going on simultaneously, but it might be all five, you know, for consumer A and zero for consumer B, or split three and two. It just changes over time. That’s why you would use a semaphore. You create a semaphore that controls access to the resource. And then they have shared concurrency. It’s still simple shared concurrency around a counter, with a capacity of five that is fixed over time. We generalize further to a governor when we want that concurrency to be more complex. When we say over time this might change. We might have exponential back off that we want to implement in the governor. We might have just resource limits that are dependent on configuration that may change. All kinds of things. That allows you to implement any kind of governor with as complex logic as you want for determining how concurrent access to a resource may be, in a way that may change over time. So that’s why we see this – the need to generalize in these three steps.

MF: So I am asking for Stage 1 for this proposal today. I want to make very clear what Stage 1 means and what I’m asking for.
So I have four points. First is: the consuming methods of AsyncIterator.prototype will have a way to drive the underlying iterator concurrently. We saw that we accomplished that in the concrete proposal using a new parameter on these six methods on AsyncIterator.prototype. We want a simple way for the common case. We do acknowledge the common case is we just want, like, one method to independently manage its concurrency, independent of what anyone else is doing, and in this proposal we solved that by just allowing integers to be passed; it’s a really simple way to ignore the complexity of governors or even of semaphores if you don’t care, which is probably a lot of cases. Point 3, we want to be able to split that concurrency efficiently over non-coordinating consumers. That’s why we introduced semaphores. And point 4, we also want to support concurrency for resources that change in capacity over time, or change in capacity based on other external factors that may not just be time. And that’s why we have the governor protocol that allows you to implement your own governors, or possibly built-in governors in the future that are more complicated than Semaphore. So I would like to ask for Stage 1 today. And at this point, I'll break to take any questions and discuss any Stage 1 concerns. I do have more slides later that we can talk about in the post-Stage 1 concerns, and even if we achieve Stage 1, I would like to go through those. But I would like to get all of that Stage 1 discussion out of the way so that we don’t risk going over time on that.

USA: MF, would you like to pick from the queue? There is a queue.

MF: You can manage the queue, that’s fine.

USA: On the queue first we have JHD.

JHD: All right. So I have a few – I have a number of items of feedback here. My queue item topic is one of them. A number of them are definitely for during Stage 2, like post-Stage 1 concerns, so I can defer them to later in the agenda item. Maybe I missed it. Did you provide a concrete example for number 4 on the slide one or two ago, resources changing capacity over time?

MF: I provided one verbally.

JHD: I must have missed it. Can you repeat it?

MF: So the example I gave for governor that – what example did I give?

LCA: One example is a governor that is backed not by something that is local to the system, in memory, but backed by a distributed concurrency control mechanism: you have a database that stores an integer that increments, and you want to share this governor, I guess, across multiple different machines. JavaScript doesn’t open databases, so you have to implement this by yourself, and that’s what the governor protocol allows you to do.

JHD: Okay. If you were talking to an API and it had knowledge that the local machine didn’t, or something?

JHD: Get back to me when we get to the post-Stage 1 stuff.

USA: Okay. Then we have a reply by KG.

KG: Just briefly. Lots of APIs have a thing where you can do a hundred requests in the first minute and 50 requests per minute thereafter. That’s a very common thing for APIs.

RBN: So I was very briefly able to go over some of this proposal. In a way, I kind of correlated how I look at things with .NET, with the Task Parallel Library, which uses something along the lines of what a governor does to perform things like partitioning, to do chunking, and to kind of organize how you want to handle your concurrency across various things. I think that makes sense. I do have a few concerns over names.
I know I reached out to you, MF, prior to the meeting to set something up; I hadn’t had time to set something up. I'll try to do it in the next couple of weeks. I have concerns with a name like semaphore. It’s very heavily used for multi-threading concurrency and coordination primitives. And I want to make sure that, depending on how it’s structured, if you use something like semaphore, that doesn’t preclude something like shared structs and the multi-threading work, with mutexes and semaphores as a coordination mechanism for shared-memory multi-threading, and make sure we’re not stepping on each other’s toes with those designs. That’s my only real concern.

MF: I have some ideas for alternative names that we can get to here.

RBN: Again, those are Stage 2 concerns. That wouldn’t be blocking for me.

KKL: First, I’m very much in support of investigating concurrency control in Stage 1. You have my support there. I want to pile on with reservations about the name semaphore, but for different reasons than I expect most folks have on this call. In particular, governor and semaphore are both steam-engine metaphors, but in the steam-engine metaphor, a governor is not a kind of semaphore and a semaphore is not a kind of governor. But governor is a very, very good term for what is being proposed here, a specific kind of control loop. As long as we’re doing control loops, you might want to look at whether this protocol would allow you to implement something like AIMD, the additive increase/multiplicative decrease algorithm that TCP uses, and, while you’re at it, whether the error channel is observable, because that is input for that particular class of algorithm. I think this might be sufficient for controlled delay as it’s written. But totally worth trying out. And apart from that, that’s me.

MF: Okay. I would love for you to open issues on those so I can learn more about them.

KKL: Love to, thanks.

MM: So, echoing the concern with the name semaphore, but it’s more than that. I'm concerned with entangling this with multi-threading, even to the extent of trying to make it shareable between threads and trying to anticipate what the semantics should be when shareable between threads. And that goes even beyond whether it’s named semaphore or not. As used within a single agent, this is non-blocking. I assume that the anticipated semantics, even when shared between agents, is that for each agent it is still non-blocking. And that just violates all of the normal understanding of what a multi-threading semaphore is. So I support Stage 1, but with reservations about everything and anything that you do that touches multi-threading.

USA: KG has a reply to that.

KG: Off of the main thread, it can be blocking. I mean, it would be a different function that you couldn’t call on the main thread, like with Atomics, but there’s no reason not to have a blocking one off the main thread.

MM: I would certainly object to introducing any more blocking operations in the language.

KG: Outside of the main thread? Like, we have lots of those. That’s like a totally normal thing we do now.

MM: Give me an example of another blocking operation in the language.

KG: So the language itself contains almost no such things, but every host has these.

?: Not talking about –

KG: Most of the common hosts.

?: I don’t think that’s a useful distinction but we don’t need to get into that.

USA: Should we move ahead?

USA: Next we have PFC on the queue.

PFC: Okay.
We discussed this, and I was of the position that giving this much control over something that JavaScript developers generally don’t have to think about is kind of un-JavaScript-like. I think it would be good to explore simpler designs during Stage 1 — like, we had the slide with an integer for simple limitation, semaphore for a shared resource, governor for more advanced use cases. Like, how much of what JavaScript developers want to do with this could we cover with the integer, for example? I’d like to see some exploration of that. Because if we’re adding two separate classes so that you can use those primitives to build any sort of concurrency regulation that you want, I think – I can see why that would be useful, but I have my doubts about whether that would be generally useful for 99% of what people want from concurrency control.

MF: Okay. I think I can provide some references to you of npm libraries, and their popularity and use, where they are currently being used to manage concurrency in this way. And maybe that will help convince you. But I also encourage you to open up an issue where we will talk about this in more detail.

PFC: Okay.

USA: Next we have Dan Minor.

DLM: Sure, thank you. So I think probably fairly similar feedback to what you have already received. We definitely support investigating concurrency control, and that makes a lot of sense. We opened a couple of issues with concerns that came up in the proposal meeting, and share concerns around naming. Another area that we are interested in seeing is use cases outside of async iterator helpers. At the moment, obviously those exist, but I would love to see those fleshed out a little bit more.

USA: On the queue we have WH.

WH: This is more of a clarifying question about what happens when you take code which calls `forEach` and start adding governors to these calls. Let’s say you’re calling `forEach` with a governor and function *F* and then *F* itself calls `forEach` with the same governor, what are the consequences of that?

MF: If it is the same governor, it is pulling from the same capacity pool, so it will try to acquire again; it's the same thing if we use the governor in a way that is not reentrant.

WH: So this could deadlock?

MF: Um –

KG: Yes, it can deadlock; any concurrency control that looks anything like this can deadlock. If you have good ideas for avoiding that, that sounds great.

WH: And the deadlock would show up as what?

KG: Good question, um, I have not thought about that. I mean, yes, the most basic thing is that the promises just never resolve, because we are waiting for the resource. But it is possible that the engine can help you in this case; I have not thought about that.

WH: Okay, thank you.

USA: Next on the queue we have SYG?

SYG: I really do not want this thing to be called a semaphore, given the mutual exclusion thing that is usually called a semaphore. I would be much happier with something like a counting governor, but that is the only feedback that I have. It is certainly not a Stage 1 blocker.

KG: How is this different from a semaphore?

SYG: It has a very different API — it's this thing that hands out tokens rather than a counter. I understand that conceptually it is counting, but semaphores are more like exclusion synchronization primitives.

MF: I think we have seen enough opposition to the name “semaphore” that we should already be investigating other names anyway.

KG: I would like to understand what the objection to this name is?
Because it makes it hard to explore other names. If the objection is that this does not look like a semaphore, then sure — I guess I don’t understand in what way it does not look like the semantics of a semaphore, so –

MF: Maybe we can chat in Matrix.

USA: On the queue we have JHD.

JHD: Um, I can wait until we are in the post-Stage 1 stuff.

USA: Okay, Kris, would you like to go first?

KKL: Sure, I am just following on from the last item. If there were a semaphore in this language, I would expect it to have an API like increment and decrement, where it would block if the resource is unavailable, and it is my expectation that the language should never have such a thing. But if it did, I would expect it to look like a semaphore in another language. Um, and I wanted to clarify whether this API could lead to deadlock — deadlock is not possible without something like a true semaphore, but that is mostly a nomenclature difference — and I wanted to clarify the classes of deadlock, datalock, and livelock. Deadlock means that the program cannot make progress due to a cyclic dependency; livelock means it cannot make progress because it is spinning the CPU; datalock means the program can continue to make progress on other events. And yeah. My expectation is that, if misused, this API could cause datalock, as is possible with cyclic dependencies, among other things.

MM: This really does not have much to do with the current topic, but since KKL offered a taxonomy of lock names, I wanted to put in gridlock: it is like deadlock, but because of insufficient buffering space — if there were more buffering space it would deadlock later.

USA: Thanks for that. MF, would you like to ask for Stage 1?

MF: Yes, so from what I have heard, the only possibly Stage 1-blocking concern was from Igalia and PFC, about us potentially investigating the wrong area, and that we should be investigating only the needs that are satisfied by a simple integer. Can Igalia speak more about whether that is Stage 1 blocking, or how they would like to proceed?

NRO: It was not blocking, but we would like to see those resources that you mentioned.

MF: Okay, so: additionally investigate whether a reduced form of this proposal could provide most or all of the benefits that we are looking for?

NRO: Yes, we are fine with investigating, but with the investigation we need to solve other things as well.

PFC: That seems like the kind of thing we should investigate during Stage 1.

MF: Then in this case I would like to ask for Stage 1. As a reminder, this is what this proposal’s goals were for Stage 1.

USA: Feel free to speak up and add any words of support or disagreement in the queue.

USA: I think we already heard a couple of folks express support earlier. So nothing in the queue — and, MF, DE says yes on Stage 1, thank you DE. Yup, MF, you have Stage 1.

MF: I would like to leave the remainder of my time –

USA: So far we have six minutes and one item from JHD.

MF: Okay, JHD, do you mind if I go over the listed items? So, the relationship to explicit resource management: I know in the explicit resource management proposal we had intentionally left the door open for an acquiring API for resources, something that goes hand in hand with that proposal and that maybe would be automatically invoked by the using syntax when acquiring a disposable. I do want to investigate how that relates to this, whether Governor is a more specific version of that. So I hope to work with the champion of that proposal, RBN, on figuring that out.
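To make the shapes concrete, a hedged sketch under the proposal as presented — `Semaphore`, the disposable token returned by `acquire()`, and the consuming-method concurrency parameter are all Stage 1 material and not final; `using` comes from the explicit resource management proposal, `AsyncIterator.from` and `map` from async iterator helpers, and `urls` is a hypothetical iterable:

```js
const limiter = new Semaphore(3); // hands out at most three tokens at a time

async function limitedFetch(url) {
  using token = await limiter.acquire(); // resolves once capacity is available
  return await fetch(url);               // the token is disposed (released) when the block exits
}

// Separately, a consuming helper could take a governor or a plain integer to bound how
// concurrently it drives its underlying async iterator:
const pages = await AsyncIterator.from(urls) // `urls`: some iterable of URL strings
  .map((url) => fetch(url))
  .toArray(3);                               // at most three outstanding next() calls
```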
As far as open design questions for governors: should the protocol be Symbol-based? But `acquire` is also nice — like, how often are you manually invoking it? I don’t know. Should there be a synchronous acquire that will throw if it cannot immediately provide you with a governor token, or returns null or something? Maybe. Please talk about that in issue 2. And maybe, if we have a Governor class, the constructor should be a convenient way to construct a Governor without subclassing; I think this is probably unnecessary. I think Governor is a good name, and KKL supported it. Possibly Regulator is another name I can come up with.

MF: And so semaphore's alternative name could be CountingGovernor. This is the one recommended by SYG. And I'm perfectly happy with calling it a CountingGovernor. I think that describes what it is, and it may not be as initially intuitive as semaphore was, but other people thought semaphore was possibly misleading. So in order to drop that baggage, CountingGovernor is fine.

LCA: I will add this to Matrix: Governor.Semaphore.

MF: Sure, we can do that too; that will help if we have a Governor class, which I do hope that we can have — I think that is really nice. For semaphores, I am looking for cases where people feel that idle listeners might be useful, and this is where the semaphore is at capacity. Sorry, no, um, yeah – by at capacity, I mean there is no governor token outstanding: not all of the possible active governor tokens, but zero, there is nothing currently using that resource. And there seem to possibly be concerns about semaphores across agents from MM, so if you can outline that possible issue, we can make a decision, and I would also like to discuss the possible benefits of that. I think there are valid use cases. And finally, for async iterators, the reduce parameter, if you noticed, is kind of gross: it comes after the initial value, and I don’t know how to really resolve that better — if you have ideas for that — because then it will force you to provide an initial value, which you may not want to do or be able to do. I would like to give the remaining time to JHD now on the queue.

JHD: Okay, so this is definitely all post-Stage 1 feedback. My initial reaction was that there are too many nouns and too many classes doing too many things, and a lot of these things can perhaps be simpler; over time I can make more concrete suggestions, but this is general feedback for right now. The wrap method seems like it is just parens-arrow-`with`, so it does not seem that it is useful. And then I had a question about the `buffered` helper on async iterators. It looks like buffered is there to give you an async iterator that has the governor parameter, like, preset. Right?

MF: Buffered does not change from how it is in async iterator helpers. It is already existing.

MF: It just also accepts a governor.

JHD: But what I am wondering is, say I call it with a 3 or something, um, does that mean that is the concurrency that is then used by the other helpers after that?

MF: Um, no, that is the number of –

JHD: Oh, okay. Got it. Okay, so the concurrency in the methods below that is not directly related to the buffered helper — that was my confusion.

MF: No, no — that concurrency parameter is how those methods drive the underlying iterator, so again, how many outstanding promises they will keep.

JHD: Okay. Thank you.

USA: And we are on time. Okay, let’s see. There is nothing in the queue.

MF: Okay, great!
If you think of anything else, please open an issue. Thank you.

USA: Would you like to dictate a summary?

MF: Not really?

USA: Could you edit it into the notes?

MF: Sure. Sure, I will write it in the notes.

### Speaker's Summary of Key Points

- concurrency control advanced to Stage 1
- there was much concern about the naming of Semaphore
- there was some skepticism about the motivation for the full Governor abstraction from Igalia and Mozilla
- MM had concerns about cross-Agent sharing of Semaphores

## AsyncContext

Presenter: Chengzhong Wu (CZW), Andreu Botella (ABO)

CZW: All right, I will share my screen. It is black for me, I don’t know.

ABO: This is Andreu, and so this is the AsyncContext update about integration, and some – next slide.

DE: If I can jump in, I got in touch with some framework maintainers about AsyncContext, and people are generally very excited about this feature, both for client and server. Some use cases include:

- keeping track of what the component is
- performance tracing
- maintaining hooks state
- Generally, allowing adoption of async/await in frameworks

I had some hopes that AsyncContext would make sense for the equivalent of React Context, but these are already implemented efficiently by referencing the component tree, so there is not a lot of demand for that particular usage mode.

CZW: We also – next slide please. So we designed the API and use cases, and this is a [INDISCERNIBLE]. So, this is the part of the API which is a stable subset, and this stable subset of AsyncContext is what OpenTelemetry uses. And so with this part of the API, we can have the benefit of not storing global variables in the library, and also have the support of –

ABO: Yeah, so, proposal 100 is the proposal for the web integration. We have APIs that take a callback, and a snapshot is active, and so the idea of the general approach that we are going with is basically to apply snapshot.wrap to every callback that is passed to a web API. So the snapshot is stored whenever the callback is passed, and then when that callback is called, the snapshot is restored. Feel free to take a look at the proposal. We want feedback from more developers — web developers and vendors and anyone. And here we have an example where we call setTimeout inside a context where we set v with run, and we can observe that v.get() is equal to 1.

ABO: So, events are essentially the main source of complications — they are very complex. The same basic general idea applies: when you call addEventListener with a callback, or set .onclick or whatever with a callback, it will store a snapshot, and when the listener is called, it will use that snapshot. There is also a relevant snapshot from wherever the API is called — we will see more of this later — but that snapshot is not going to be exposed by default, because in order to make this work consistently you would have to thread a lot of snapshots through multiple APIs, and most events can’t expose that. So here we have a specific example of what is going to be in our initial proposal for the web integration: the unhandledrejection event is fired when you have an unhandled rejection, and this event object would have a rejection snapshot property, which would be an AsyncContext.Snapshot with the context where the promise was rejected. And we have something similar for the error event, which is when you have uncaught throws, and that will have a snapshot property.
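A sketch of the behaviour described above, assuming the proposal's `AsyncContext.Variable`/`AsyncContext.Snapshot` API and the draft web integration:

```js
const v = new AsyncContext.Variable();

v.run(1, () => {
  // Under the proposed web integration, the callback behaves as if it had been wrapped
  // with AsyncContext.Snapshot.wrap() at the moment it was passed to setTimeout.
  setTimeout(() => {
    console.log(v.get()); // 1 — the snapshot from when setTimeout was called is restored
  }, 0);
});

console.log(v.get()); // undefined — the value only applies within run()
```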
+
ABO: So here you can see — this is the context snapshot object at the time that the promise is rejected, at the time that the host promise rejection hook — I forgot the name of that host hook — is called. And another thing is what the active context is whenever you run a module, when you have module evaluations. One of the goals of the module system is that module execution should be deterministic, because if you import the same module multiple times, it should only be loaded once, and you should not be able to observe that. So the idea is that there would be a host-provided initial snapshot that is set whenever you import a module or whenever you evaluate a module; that would be the context at the top level of the module evaluation.

ABO: And the proposal is Stage 2. There is still some more work on integration — there is still some investigation to be done on the web integration with this proposal — and so we will see the feedback and how we resolve it. Do you want to continue, CZW?

CZW: We can see if there are any comments in the queue?

MM: I just got myself in the queue. So, most of what I heard here is what a particular host, a particular browser, can do with this context and how it is integrated with host-level semantics, and that is all fine, but it does not finalize what the host-neutral semantics is. But some of the things – well, one thing in particular stood out to me as being different from what I understood to be the agreed language semantics, which is the capturing and propagation of the snapshot through promise resolution. What I understood is that the propagation was always on registering the callback, never on the resolving of the promise. And it seems like this requires violating that; am I confused?

ABO: So, um, whenever you have a promise rejection – the only thing that happens when you have a promise rejection, and this already happens, is that the spec calls into a host hook to queue the rejection event. This would be for the unhandledrejection event. So we would be creating a new snapshot at that point and exposing it on the rejection event. And for hosts that do not have rejection events and that simply exit the process when there is a rejection, this would not have any change in behaviour or any global state.

MM: So with an unhandled rejection — the host action on unhandled rejection sounds reasonable — but the unhandled rejection does not happen at the point of rejection. It happens at the point where the language notices that the rejection was not handled. You need to capture the context where the rejection happened, right?

DE: Can I address your question here, Mark? I am on the queue.

MM: Please.

DE: So, yes, for promises, `Promise.then` is consistently handled as well as `Promise.catch`: both reactions consistently run within the context snapshot where the reaction was scheduled. So whether you do `await` or you do `.then` or `.catch` or `.finally`, it is always executing within that "registration" context.

DE: Now, our proposed behaviour with callbacks in the web platform is to basically do that as well. Just like Promises, they restore the context snapshot which was active when the callback was received as an argument. Sometimes there is other contextual information that is relevant.
And so, the solution we are going for, rather than doing something fancy and automatic, is to make it so that it is as if you constructed a new AsyncContext.Snapshot at the point when some interesting thing happened, and that is made available somewhere else for explicit use.

DE: For example, when you reject a promise, the snapshot is sort of eagerly captured. And then if it turns out it is not handled, then that snapshot will get passed to the event handler, exposed as a property on the Event object.

DE: An alternative semantic is, when the promise is allocated, to capture the snapshot in case it becomes an unhandled rejection. These two sound different, but it turns out that they are actually similar. We were not really able to find cases where one behaved differently than the other for developers, and we are proposing capturing it at the time of rejection. Note that, when the platform has an API which returns a Promise, there might be no JavaScript active when the Promise is rejected, and so the platform must cache the Snapshot earlier, e.g., when the Promise is allocated.

MM: You should note, by the way, the idea of capturing these is attractive to me. I like the idea of it. What I am raising is how it fits with the rest of the semantics, and you answered that very well. And I will clarify two things. And that it is only – if you are capturing the rejection, it is only on rejection, and not on fulfillment.

DE: That is right. Our proposal is that, on fulfillment, nothing is captured.

ABO: It should be noted that this is 100% in the host, like, inside of the host. When a promise is rejected, it will call the promise rejection host hook at that time — like, if you call the reject handler, that reject function — at that time, synchronously, the host hook will get called, and it is the host that stores that and eventually fires the unhandledrejection event. And so it is the host that is taking the snapshot and saving it somewhere.

MM: So whenever there is a promise rejection in the language, there is a call out to a host hook, for every rejection?

ABO: From my understanding, V8 doesn’t do it exactly like that. Like, V8 implements the host hook, and it is all done with some Blink host callback. And I guess other implementations are similar. But yes, that is the way that the spec defines it.

MM: Okay, um, so I am satisfied. This all sounds like it is going in a good direction, and this was an update rather than a stage advancement, so I will register that I am happy.

ABO: Okay, um, we also have a few possible follow-on proposals –

DE: Before we go on to the follow-on slides, I would ask everybody — the plan is to propose this for Stage 2.7 at the following meeting, and we need to go through rounds of review on the integration, but other than that, we believe this proposal is completely done — and we want to take our time with the queue to hear out any concerns that people have. One example of a concern would be “this web integration does not give me the context that I need for XYZ use case”. We have been able to think of a lot of theoretical things that people might want. But we are trying to go for a minimal version right now, because that should be simple for everybody — simple for us to learn the feature, for people to implement it, and for the specification. But we do not want to make it too simple. And the following slides go in the direction of possible extensions.
But we did want to – our intention is to just be proposing this thing for Stage 2.7 next meeting and so I want to give a bit more pause to see if anybody else has more comment for Stage 2.7? + +CDA: On the queue DLM? + +DLM: I got to take the time to get review of the HTML integration, so I will be following up on that this week. But not off hand that would be done in time for Tokyo. + +DE: Okay, thank you. + +CDA: Anybody else in the queue? + +ABO: As part of follow-on proposal, and we mentioned dimension, this is like some other context that is not currently being used, and I wanted to mention this. So as I previously mentioned, for many events there is another relevant snapshot which is not the one where you call a listener and this is snapshot that is active whatever you play that causes the event to be fired. It is called, and so there is HTML element, and you call the click method and that will fire click event synchronously and that would be the snapshot where you call onClick or xhr.send will start a fetch and load event and many other event and the snapshot for those would be when the snapshot where you call send. So each specific event of class could add property to add snapshot and none of those unless you want to count error and rejection as this kind of relevant snapshot. + +ABO: So those are only ones that are present in the initial roll out but if you find use cases and if this breaks your work flow or if you really need it, please tell us. And we might add it in the initial roll out but the idea is still hopefully go adding more like eventually if we find that there are more use cases to eventually add those in the future. We want to know which one of those snapshots are important. Do you want to continue with the rest? + +CZW: Thank you. And so we have feedbacks from the community that the current API is already satisfying the use cases but there are some improvements and the experiments on the notation on the context variables and so the API and syntax variables are strictly scoped anded conditions in the libraries will never be able to effect any similar cause in the code. Like in this example, this library can be an async function but it is also asynchronously involved, so this asynchronous implementation and I think, and by await promise notice dissertation in the library will not be visible in the scope of user function. And we are not proposing this change – but there are alternatives, – next slide. + +CZW: There are some improvement that can be done in follow-on extension to improve the experience by utilizing explicit resource and using declaration, and with that proposal, the notation is still kept using declaration and it can help us reduce the number of closures and nested callbacks with the run API. Also, since with the generators are well supported, so you can say that even with yield in the scope using the declaration can still preserve the variable Values across yield. And next slide please. + +CZW: Also, there has been request in the community that they want to observe changes notation in library code which can be observed as written value with variable. We are not going to include this behaviour in the current proposal – this pattern is different from the specific legacy node.js callback style value passing, so it can be mutations can be visible to parent async scope when you observe the fulfillment as callback as well. 
So in the community, they are requesting a new kind of variable and context variable and in the follow-on proposal, and if they find the motivation to be sufficient to support the people. So we are considered this but we are not going to add this new kind of variable to the proposal at the moment. And that is all the possible follow-on extension. And we can go to the queue if there are any questions? + +CDA: Nothing right now. DE? + +DE: So does anything that the follow-on proposal is a good or bad idea? For example, I expected that the SES folks would find continuation variable to be a bad idea? What do people Think? + +CDA: CM? + +CM: My intuition, and I think DE called it correctly, is this is probably a bad idea, but I don’t understand the subtleties of what is being proposed well enough to just fling that out “Oh this is a bad idea, don’t do that”. I don’t understand it enough, and I think this warrants further exploration and a more detailed explanation of how this would actually work. + +CZW: There is a document linked on the slide, and it describes the semantics comparison between the continuation variables and their proposal variable and some of you can review the Document. + +CM: Okay. + +CZW: Thank you. + +DE: About this one in particular, I want to note that some people involved in performance monitoring, have found that it is – it would be nice to have this kind of variable, but however at the same time, the open telemetry node does not use this. So at least you can do certain kinds of API’s without this and this is just the fact that no support enter with which will give you some kind of capability similar to this. + +CZW: Well the `enterWith` is much more of a feature and there is not declaration one before That. + +DE: Yeah you are right, thank you for the correction. So, are there any particular events that people think they may want to find the causal version of? We would be happy to accept this input offline but this is the main thing that we have been scratching our heads with, and how complex do we have to make this. Any input There? + +KKL: I want to – I am on the queue. In the previous life I was involved in the open context project, and precursor to open tel., and so this comes to propagation that will come to bear on this, and this is interesting because reverse context propagation reverses the arrow in the way that goes from one to many to many to one. Which sometimes implies that you have to use completely different data structures to pull data out of reverse context. I don’t know whether this comes to bear on this, and really just wanted to propose that we talk through it on TG3, and if you can add yourself to a future agenda that would be great. The thing about reverse context that happens if you fanout to multiple parties and any of them might return reverse context is that the return context has to be aggregated some way in order to be made sense of at the conception site. So get – the get on the last line of this example would effectively require a reduction of possibly many changes. And in any case, again I don’t know whether that comes to bear, but it would be great to talk through. + +DE: Yeah, we have had a bit of debate about this within the group, and I agree that there are some multiple context that are relevant. Which is kind of why we would hoping to go with this simple policy. But yeah, let’s get on the TG3 agenda. 
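For reference, a minimal sketch of the Stage 2 API discussed above. Nothing here is shipping yet, and the `contextSnapshot` event property name is a placeholder for the web-integration idea rather than anything agreed.

```js
const requestId = new AsyncContext.Variable({ defaultValue: "none" });

async function handle(id) {
  await requestId.run(id, async () => {
    await Promise.resolve();        // the value flows through the await
    console.log(requestId.get());   // -> id
  });
  console.log(requestId.get());     // -> "none": a callee's run() never leaks to its caller
}

// Web-integration idea from the discussion: the host snapshots the context
// when a promise is rejected and exposes it on the unhandledrejection event
// (property name illustrative).
addEventListener("unhandledrejection", (event) => {
  event.contextSnapshot?.run(() => {
    console.log("rejected while requestId =", requestId.get());
  });
});
```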
+ +CZW: Yes thank you and that sounds good, and as I want to point out too that merging is also mentioned in the document in 94, and we have been discussing it what the possible ways to merge multiple context in this context to one final value. So yeah, I think it would be valuable to the idea TG3 general. + +KKL: To add to that and the solution that we arrived at in the open context proposal was that the reduction would be delayed until the receiver because like middleware would be not be in a position to know how to do the reduction. So you end up with aggregating a set and then at the point of the get, you would – that the caller would provide a reducer. + +KKL: Also that work was abandoned because there was a lot of complexity. + +CZW: Yeah, and I think the values, that is kind of the design that we want to avoid because that is a basic point of margin in the documentary – um, in the document that we don’t want to – there is a decision to the user instead. + +CDA: All right nothing else in the queue. + +CZW: All right, and sounds good, this is just a Stage 2 update about the web integration before Stage 2.7, and we appreciate the feedback on the web integration because yeah, before we ask for Stage advancement we appreciate any feedbacks on the integration. Thank you. + +DE: Okay, any additional feedback you have about this proposal, would be very welcomed and we have meetings every two weeks that on the TC39 calendar and getting in check with in Champions by any means would be really helpful. And again the intention is to propose Stage 2.7 in Tokyo. + +### Speaker's Summary of Key Points + +This is just a Stage 2 update, AsyncContext will seek stage 2.7 advancement at the 104th meeting of TC39 diff --git a/meetings/2024-07/july-30.md b/meetings/2024-07/july-30.md new file mode 100644 index 00000000..2bfe371a --- /dev/null +++ b/meetings/2024-07/july-30.md @@ -0,0 +1,827 @@ +# 103rd TC39 Meeting | 30th July 2024 + +**Attendees:** + +| Name | Abbreviation | Organization | +|----------------------|--------------|-----------------| +| Daniel Minor | DLM | Mozilla | +| Eemeli Aro | EAO | Mozilla | +| Michael Saboff | MLS | Apple | +| Waldemar Horwat | WH | Invited Expert | +| Chris de Almeida | CDA | IBM | +| Jesse Alama | JMN | Igalia | +| Jordan Harband | JHD | HeroDevs | +| Ben Allen | BAN | Igalia | +| Frank Yung-Fong Tang | YFT | Google | +| Linus Groh | LGH | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Chengzhong Wu | CZW | Bloomberg | +| Dan Gohman | DGN | Invited Expert | +| Nicolo Ribaudo | NRO | Igalia | +| Samina Husain | SHN | Ecma | +| Jason Williams | JWS | Bloomberg | +| Justin Ridgewell | JRL | Google | +| Ashley Claymore | ACE | Bloomberg | +| Keith Miller | KM | Apple | +| Istvan Sebestyen | IS | Ecma | +| Dmitry Makhnev | DJM | JetBrains | +| Chip Morningstar | CM | Consensys | +| Mikhail Barash | MBH | Univ. of Bergen | + +## Normative: fully define Math.sqrt + +Presenter: Michael Ficarra (MF), Dan Gohman (DGN) + +- [PR](https://github.com/tc39/ecma262/pull/3345) +- no slides + +MF: Yes. All right, I am copresenting this with DGN, so DGN at any point feel free to stop me and add context. I will defer to you to add anything else you want at the end. So, we are trying to actually define what `Math.sqrt` produces. Currently it produces an implementation-approximated value approximating the square root. And as context this is part of a broader effort to try to discover what the actual web requirements are for the values that are today implementation-approximated. 
We are aware that there are some requirements on both the accuracy and certain invariants for these implementation-approximated values which are used in math functions. And this is just the first step in that direction. So, maybe someday in the future we will be proposing additional properties that must hold or additional bounds that the value must be within, but today we are just taking what should be the easiest step, which is Math.sqrt. We should be able to fully define it. WebAssembly already has f64.sqrt, which a lot of JavaScript implementations also support, and the wasm square root is fully defined. So it should not be a burden. And I have also run tests on SpiderMonkey, JSC, V8, XS, LibJS, and Chakra.

MF: These are linked from the agenda if you want to follow along, and I recommend that you read both of them if you are interested in the more general problem. 3345 is the PR. I ran tests – we can run more – but I was satisfied by running 10 million random non-negative doubles, which are the acceptable inputs for Math.sqrt, through the engines I had available to me. We see that all of those engines match on all of those 10 million inputs. I can confirm a larger set if somebody is skeptical, but this gives me good enough evidence that all of the engines do produce the same result, which is the exact result.

MF: And it makes sense: a lot of those implementations need to have the wasm instruction, so they already have the exact result. And I don’t think it is a burdensome requirement, at least on modern architectures. DGN is an expert in that, and he should be able to elaborate if anybody is curious. And the change looks like this. We just remove square root from the list of functions that do not give precise results. And instead of saying an implementation-approximated Number value representing the square root, we just say it is the square root converted to a Number. So as far as the spec change goes, I think that is all I wanted to present. DGN, did you want to add anything?

DGN: No, I think you covered it all.

USA: All right, we can go to the queue if you would like? First in the queue we have Waldemar.

WH: I think this is a great change. Back when we first defined square root in ECMAScript it was not clear if there exists an efficient algorithm to compute the exact rounded value, but now we know that such algorithms do exist, not only for square root but for trig and exponential functions as well. We now know — the numerics people now know how to compute the exact results rounded to the nearest double for a lot of these.

MF: I welcome your participation in furthering issue 3347.

USA: Next we have Dan Minor.

DLM: So we definitely support this change, and my only question was about the history, and Waldemar has covered that quite well, so this is great, thank you.

USA: Next is SYG?

SYG: I support, and thank you for doing the leg work. Since this is already the state of the world for web engines today, it looks good to me.

USA: That is it from the queue. We will need explicit support. Would you like to ask for consensus on the PR?

MF: I would like to ask for consensus on PR 3345, given there is nothing in the queue.

WH: I support consensus, of course.

USA: Thank you WH.

DE: I support it.

USA: Thanks MF, congratulations to MF and DGN.

MF: Thank you, that was very easy.
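A sketch of the kind of brute-force cross-engine comparison MF describes: generate the same random non-negative doubles in every engine (hence the deterministic generator below, which is ours, not part of the PR), print `Math.sqrt` for each, and diff the output.

```js
// Deterministic 32-bit xorshift so every engine generates identical inputs.
let state = 0x9e3779b9;
function next32() {
  state ^= state << 13; state >>>= 0;
  state ^= state >>> 17;
  state ^= state << 5;  state >>>= 0;
  return state;
}

// Random non-negative double: random 64-bit pattern with the sign bit cleared
// (NaN and Infinity inputs simply pass through sqrt unchanged).
function randomNonNegativeDouble() {
  const view = new DataView(new ArrayBuffer(8));
  view.setUint32(0, next32() & 0x7fff_ffff); // high word, sign bit 0
  view.setUint32(4, next32());               // low word
  return view.getFloat64(0);
}

const lines = [];
for (let i = 0; i < 1_000_000; i++) {
  const x = randomNonNegativeDouble();
  lines.push(`${x} -> ${Math.sqrt(x)}`);
}
// Pipe this to a file in each engine and diff: after PR #3345 they must all
// agree, because the result is the exact square root rounded to the nearest double.
console.log(lines.join("\n"));
```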
+ +### Conclusion + +- Consensus on PR #3345 + +## Intl.Locale Update in Stage 3 + +Presenter: Frank Yung-Fong Tang (FYT) + +- [proposal](​​​​https://github.com/tc39/proposal-intl-locale-info) +- [slides](https://docs.google.com/presentation/d/1wWDYg5BF1wNNAdC6YbBKvRtKS1acyvIP0eLkWCH2c7M) + +FYT: So hi everyone I am FYT and working for Google and today I will give you an update and hopefully we could wrap the thing up pretty soon. Um, so basically, I give you an update recently happen on this location, and then try to seek a consensus for a small PR hopefully that is last one we need to touch for this proposal. + +FYT: So basically the `Intl.Locale` API and working with the ECMA4 too and try to exposed week data. And we can start day, weekday and hour cycle and such in the locale, and so having advance Stage one in accept 2020 and Stage 2 in January 21 and april 21 and I have and Safari has implemented and we found some change and problems particularly with go to a—after Stage 3 unfortunately that we change some property to function. And there is a way we can try to expose because we have to deal with first day of week issue and we are adding additional properties to the locale and we have been back and forth a couple of times. And there is some issue dealing with how do we treat a locale, and first day and we resolved thing internally. And we are changing back and forth from represent that thing as a string or a number and back toString because if we get out of that there is locale consistency there, and all things that are readdressed and we talked before and what we try to address right now is basically a couple of things. + +FYT: One thing is that there was a hanging thing about whether how does the calendar and fw keyword in the locale impact the first day of week resolution. And that was not able to drag because we are pending UTS 35 external spec to clear it out. And unfortunately that was clear out, and the last release of CLDR in October 23rd and so we missed kind of our last time we can be able to address that. But eventually, we actually pick up whatever they have and try to basic block one of the issues and good thing about after that November meeting last time, so that external dependency got addresses. The other thing is that they are thanks for people’s helping that we were calling for community support of polyfill and that is now added to improve that thing and I think the implementation so far is an issue and we are looking into it and there is improvement of the test262 tests and I think that made this API a bit more mature because we have more test coverage and also polyfill right now and Safari is shipping and chromium is updating to and synchronized. And one thing we need to address I think during the complete polyfill last time we changed it in PR 79 we missed something in the spec, so whenever the first day of week prototyped return correctly, and when we tried to get that information into the get week info, first day, which supposedly was the final result one and that should return the number and we forgot to do the conversion, so really it returns a string, and it should not happen that way and it was not intended but the spec tests we did not do that, and also, this is therefore this is a normative PR and we need to fix that. And the other thing is that kind of arguable and whether it is normative pR but I think it is because I think Anba from Mozilla has design to change spec test to make explicit about and precise about how to deal with ISO 8601 calendar. 
And the spec test was having because that value is we would describe it as a locale-dependent value so it could be interpreted as depending on that thing for locale but he expressed the desire to make it more explicit processing and say well whenever this happens, it should happen that way. + +FYT: So we agree—I think we talked about that too and we change the spec test for special hash joining and for other locale, and the on the calendar we are clear about how does that happen, so we basically say Wal 8601 (?) make sure it always return that way. Which is aligned with ISO 8601 and we put that in the PR79 and this will make it more precise and explicit in an implicit vague description to a very explicit line by line way to describe that. So this is the one thing that we like to seek for the consensus from this group. + +FYT: The other thing is that there are additional issue file to approach but we believe that these things should be deferred, and probably considered for the future extension because there are some other complications and one of the issues that people and is whether we can get a possible currency code back from locale, we think this is out of scope. We think because when we go to safety we tackle this part and this is a bit more complicated than other issue about currency code. So I think we think there too late, and we can delay that and maybe later on adding that. And so now the Chromium V8 M99 is (?) conversion and so the spec is not the—I think Safari, and so it is shipping and Mozilla is pending and I think that additional that we have polyfill, and Chromium I tried to synchronize whatever last time we talk about November about a getter function thing in 129 (?) and we are working on that and we will probably have that soon. + +FYT: So here is my request to committee is to have consensus about PR83 and I am happy to discuss in TG2, and I think people are supporting and forget who is supporting that and I think there is notes in the PR about explicit support with their and committee discussion but I think we need to bring up TC39. So any questions. + +USA: So first in the queue is DLM. + +DLM: Yeah I checked just now there is still an open issue with PR8 3 which it looks like some additional clean up to the spec test after Anba’s patches are (?). So I think we are supportive of the change as long as that review comments is addressed. And so yeah, see. So that is it right the following where it says two weeks ago. So maybe you need to move a line there, I am not sure. And so this point of view our approval of this is conditional on Anba’s review comments being addressed. + +USA: That was the whole queue, FYT. + +USA: DLM has another thing to add. + +DLM: Yes update on the Firefox status, and yes we do have initial notation that Anba has been working on and has not been updated but given that he is still spending time to finish this off, and if for some reason he is not able to, I think another member of team will pick it up and so we will say something we intend to eventually. + +USA: Thank you. And that is the queue again, FYT would you like to fully request for consensus?. + +FYT: Yes, please. + +USA: We will give it a minute for any expressions of explicit support or any dissent. There is nothing on the queue still, and you have consensus. Congratulations, FYT. + +FYT: Thank you. 
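Illustrative of the behaviour that PR #83 pins down; exact values depend on the implementation's locale data, so treat the outputs below as examples only.

```js
const en = new Intl.Locale("en-US");
console.log(en.getWeekInfo()); // e.g. { firstDay: 7, weekend: [6, 7], minimalDays: 1 }

// getWeekInfo().firstDay is always a Number, even when the first day comes
// from the string-valued fw keyword (the conversion PR #83 adds):
const enMon = new Intl.Locale("en-US-u-fw-mon");
console.log(enMon.firstDayOfWeek);         // "mon"
console.log(enMon.getWeekInfo().firstDay); // 1

// Per the discussion above, the iso8601 calendar case is now specified
// explicitly instead of being left locale-dependent (weeks start on Monday).
const iso = new Intl.Locale("en-u-ca-iso8601");
console.log(iso.getWeekInfo().firstDay);   // 1
```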
+ 

### Conclusion

Consensus on normative PR #83; the Intl.Locale Info proposal remains at Stage 3.

## unordered async iterator helpers for Stage 1

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/michaelficarra/proposal-unordered-async-iterator-helpers)
- [slides](https://docs.google.com/presentation/d/1EDhoV4Vyh1Pte-W2qWvvCeLwhQ61dMFT55GNg0VeDLM)

MF: So let me bring you back to this little diagram that I showed you yesterday. Yesterday we talked about the concurrency control proposal, which was a follow-on to async iterator helpers that allowed us to drive them concurrently, not just support concurrency. Additionally with the async iterator helpers we began exploring how we can provide more optimal use of the concurrency available to us, and that included the opportunity to drop the ordering constraint from some of the helpers so that they could operate more efficiently. By "dropping the ordering constraint", I mean the results are not necessarily yielded in the same order that they were yielded in the input iterator. We decided, though, that keeping it in that proposal would expand the scope of async iterator helpers too much, and decided to split it out. So this is that proposal. I wonder what we’re going to split out from this proposal once we—it’s probably something. So you might recall that KG previously presented this series of diagrams in the async iterator helpers proposal when talking about the ordering constraint, and I am simply copying them here to show you that sometimes, when things are not the next thing that’s being yielded, they hold up these concurrency slots even though they’re ready to go, whereas if we drop the ordering constraint, they can be yielded in any order, and we can use up as many slots as we have as efficiently as possible.

MF: So that’s the motivation for having unordered helpers. I also think that oftentimes you’re using async iterators in a way that ordering does not matter to you. You just want certain work to be done and to have those results, and even if you do care about the order, sometimes you want to do that work unordered and later reapply the order. If you zip with the natural numbers, you can kind of reorder things asynchronously later, after the work has been done more efficiently.

MF: So this is what the proposal looks like. For the AsyncIterator helpers proposal, we have a method that is added to `Iterator.prototype` called toAsync that gives you an iterator that inherits from `AsyncIterator.prototype` and has all the same helpers.

MF: What this is proposing is we do something similar and add a method to `AsyncIterator.prototype`, which I called unordered, that gives you an iterator that inherits from `UnorderedAsyncIterator.prototype` with all of the MVP helper methods. The difference is the unordered async iterator helper methods are able to operate more efficiently because they don’t guarantee order – or at least some of them are able to operate more efficiently.

MF: So what does that look like? Well, in the user code that you write, you can simply add unordered at some point, you know, the earlier the better, and drop that ordering constraint, and then everything after that is able to be done more efficiently. So I guess the important part is that it is—and I’m kind of skipping ahead of myself here — it is not easy to accidentally mix unordered iterator helpers with ordered iterator helpers.
We have explored with the async iterator helpers proposal possibly just having two variants of each method like `map` and `mapUnordered`, but that would allow you to create these chains of method calls where you are mixing unordered and ordered helpers and as soon as you add back the ordering constraint, you lose all of that efficiency from upstream. So this design allows us to achieve that goal. + +MF: So like the last proposal, I will clearly outline what Stage 1 means. What I’m asking for today. What we are looking to do is find a way to provide more performant async iterator helpers when it’s okay to drop the ordering constraint, which I believe to be a very common case. So the way that we achieved that in this, again, overworked proposal today is the addition of an unordered method to `AsyncIterator.prototype` and the whole `UnorderedAsyncIterator` prototype and all of the methods on it. We want that to be easy and convenient to use because yet again, I expect it to be common to want this. And we don’t want you to accidentally re-apply the ordering constraint. We want the way you do this to make it hard to mess up. I also wanted to note that this will depend on the concurrency control proposal for the iterator consuming methods because this provides no benefit if there’s no concurrency, right? Dropping the ordering constraint with a concurrency of one is the same thing. So I would like to ask for Stage 1. I think I also have some post-Stage 1 design questions, just a couple, if we do achieve Stage 1. + +USA: Okay. Let’s see the queue. First we have DLM. + +DLM: Sure. First of all, what you want to investigate makes sense to me and so did the concurrency control topic yesterday. It feels like very logical kind of follow on from the initial work. The only thing that I would like to say is it feels like we’re getting pretty far down the rabbit hole and Chrome has been shipping iterators for a while and I’m hoping to get this week and I’m wondering if we’re going far down the direction without getting feedback and seeing how the initial stages of this have been used. But no problems. This makes sense for Stage 1 to be investigated. + +MF: Yeah, I have personal experience at least with using the iterator helpers. And actually not even realizing it, they’re just—I expected them to be there. But I do understand we don’t really have too much community feedback. But at least when we talk about maturity, I would say that the async iterator helpers proposal is close to 2.7, so I feel like we’re building on a fairly solid foundation. I don’t personally feel that we’re looking too far down the road here. But you are right that it is possible that the things that this depends on are still kind of open to change. + +USA: All right. Next on the queue, we have SYG. + +SYG: So I agree with DLM there and going down the rabbit hole thing. I would like to see using validation. That would be great. User meaning web developer at large for me. For asyncIterator helpers the demand from web developers was kind of overwhelming clear. Everybody as you were saying kind of expected them to be there. For async that is less clear to me. For unordered async that is even less clear to me. And TC39 kind of operates in this method where we design everything up front and we ship it. And normal software development process, I would have expect some kind of incubation process here to get a better signal of developer demand. Since that’s not how TC39 works, I also have to same concerns as DLM. 
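A sketch of the shape being proposed. This is Stage 1, so every name here may change, and the way the concurrency argument is passed (an options bag on the consuming method, from the separate concurrency-control proposal) is illustrative.

```js
const urls = ["/a.json", "/b.json", "/c.json"];

const pages = await AsyncIterator.from(urls)
  .unordered()                                  // drop the ordering constraint
  .map(url => fetch(url).then(r => r.json()))   // requests may complete out of order
  .toArray({ concurrency: 8 });                 // shape of the concurrency argument is illustrative

// `pages` contains every result, but not necessarily in the order of `urls`.
```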
+ 

USA: Next on the queue, we have MM.

MM: First of all, let me just also say I support Stage 1 completely. I’d also like to respond a bit to SYG and DLM, which is that given the support for iterator helpers, it would be very surprising to not have at least that much support for AsyncIterator helpers. That doesn’t speak to this proposal, but it does say that AsyncIterator helpers, I think, do not need that much empirical validation, by that argument. In any case, the question that I have is about instanceof behavior. In particular, does AsyncIterator prototype inherit from unordered AsyncIterator prototype, or vice versa, or are they just distinct prototypes?

MF: The answer is, no, there’s no inheritance relationship. I had investigated that. I’ve talked about that with KG, maybe a dozen times. And it doesn’t seem like there’s much benefit. I would love to hear if you have use cases for –

MM: The main one is just a least-surprise issue, which is if people ask of an unordered asyncIterator instance, are you an instanceof AsyncIterator, it seems a little bit strange from the substitutability principle for it to say no. But that’s it. That is the only reason why I find the further inheritance tempting.

KG: I have the opposite intuition about what the Liskov substitution principle implies here. The normal AsyncIterator has the stronger constraint: if you map over it and iterate over the results concurrently, you get things in order, and the unordered ones don’t. So you could perhaps have the inheritance go in the other direction – unless you consider performance to be part of the Liskov principle.

MM: You are correct. The principle would have the inheritance go the other way. And that also—that does make sense to me. And as you explained it, it makes more sense to me than the one I started with.

USA: That’s it. We have also +1 from CDA and +1 for Stage 1 from Luca as well. So only really positive statements so far. Let’s see if somebody else is going to add—okay, KKL same +1. +1 from WH. So you have Stage 1. Congratulations.

MF: We didn’t ask for official consensus.

USA: That’s true. Do you want to do that now?

MF: I would, but before that, I would like to just make the acknowledgment that I appreciate both DLM and SYG voicing their concerns that we’re possibly looking a little too far down the road, and I want to try to alleviate that by saying I will not try to advance this even to Stage 2 until we both have async iterator helpers at Stage 3 and more experience with async iterator helpers, and hopefully that will make them more comfortable.

MF: Yes, I would like to ask for Stage 1 now.

USA: Wait. We already heard a lot of support so far. Let’s give it a minute or so to see if anybody has any more to add. Nothing on the queue yet. You have consensus. Congratulations on Stage 1.

MF: Thank you. As I mentioned, I have a couple more things if anybody wants to give feedback, I would be happy to hear. Again, this proposal, just like concurrency control, is more worked-out than usual because both were split from further-along proposals. The first question is that the names `toAsync` and `unordered` don’t really match, even though they’re doing kind of the same thing of giving you this new prototype. Should they match a little more closely? Should async iterator helpers be updated to say `.async()`, or should we change the name to `toUnordered()`? I don’t know.

MF: Something I haven’t listed on this slide but I did want to add and I forgot to add was, if you look here, `AsyncIterator.prototype` doesn’t currently have a `.toAsync` either.
The only thing they share is MVP methods. Should `AsyncIterator.prototype` have `toAsync` and also `unordered`? It basically would be so you can call them whether or not you know you have an async iterator or unordered async iterator. Would that be helpful? I don’t know. + +MF: And then the third point: should we require the concurrency parameter for these? The concurrency parameter In the concurrency control proposal is optional. But with these unordered helpers, concurrency of one is bad. You don’t want to do that. Should we require them? Should they be different from async iterator helpers in that way? I would like to use the rest of my time to hear any feedback on those questions if anybody has them. + +MM: So on the coercion methods, independent of what they’re called, I do like the idea that when you introduce a coercion, that the thing itself coerces to itself, IE, the target of the coercion honoured the same coercion operation as the identity function. + +MF: That’s my intuition as well. I was leaning slightly towards that. That would be possibly then asking KG in the async iterator helpers proposal to add `toAsync` to `AsyncIterator.prototype` but we could also do that later. It’s not the end of the world. + +USA: That was all for the queue. No, next we have WH. + +WH: If you’re proposing to change API of the methods, then a question I have is why do you need a separate object instead of having both ordered and unordered versions of the methods on the AsyncIterator prototype object? I think I can guess the answer. + +MF: I covered that a little bit earlier. It was to solve this last constraint here that it should not be easy to accidentally mix with ordered helpers. So we went with this design. You know, we considered having ordered and unordered variants of async iterator helpers, that’s how we originally were incorporating it in that proposal. But with this split, I have gone with this design so that it is virtually impossible to accidentally mix ordered and unordered helpers. + +WH: I assume you can have ordered helpers and switch to unordered mode and then run unordered helpers? + +MF: If you switch to unordered mode the only way of calling ordered helpers is Function.prototype.call on the ordered iterator helper. You could also, if we don’t require a concurrency parameter, use a concurrency parameter of one to turn them into ordered helpers. At that point you’re intentionally shooting yourself in the foot, and I’m okay with that. + +WH: No. What I meant is, before the call to `unordered`, you could run some helpers and then call `unordered`. + +MF: That’s true. You definitely can. And I think that’s fine. You may have a dependency on ordering in this first helper. + +WH: Okay, thank you. + +USA: That’s the rest of the queue. + +MF: Thank you for the feedback everyone. + +USA: And thank you, MF. Would you like to take a minute to dictate key points and a summary? + +### Speaker's Summary of Key Points + +MF: Yes, I would. So unordered async iterator helpers has been split out of async iterator helpers and async iterator helpers will no longer be considered in that problem space. Unordered async iterator helpers has reached Stage 1. There’s concern that we may be designing too far ahead, so the champion, myself, has committed to not trying to advance unordered async iterator helpers to Stage 2 until async iterator helpers has further advanced and we have more experience with async iterator helpers in the field. + +MM: I would like to interject. 
One additional point about the coercion operations, which is if you have the iterator and you want an unordered AsyncIterator helper, you should not have to do two coercions to get there. + +MF: Interesting. I’m open to that. That’s definitely a Stage 2 concern. But I’m open to adding that. + +USA: Okay, great. So that’s that for this item. We’re going to squeeze one more in. So JHD, are you prepared for—let me check is error. + +## Error.isError for Stage 2.7 + +Presenter: Jordan Harband (JHD) + +- no proposal +- no slides + +JHD: Yeah. If you scroll down,CDA, you’ll see that I do have a bunch of unchecked items in the Stage 2.7 section. So I do not believe it’s ready to ask for 2.7 today unfortunately. Hopefully my spec reviewers can take a look at it sooner than later. But either way, I wanted to talk about the PR 11 on the repo and get consensus on the proxy based behavior on `error.isError`. So in issue 8 a number of things brought up about why it would be bad idea for another thing that Pierces proxies. KG has the comment about the perspective of creator of proxies is primitives and functions and proxies and arrays and that adds a fifth item to the list. Additionally there’s a comment that MF made in a different issue which is that this would make object prototype 2 string conflict with `Error.isError` because it does not proxy pierce and `Error.isError` would and you would be able to determine with the combination that an error—something is a proxy of an error even though `Error.isError` attempted to mask that. I find all of the arguments to be very strong and so I put up question 1 that removes proxy piercing and reduces the spec text down to a much more minimal set that is effectively is it an object with an error data internal slot? + +JHD: I was hoping to ask for consensus that we would remove the proxy piercing. At which point the only thing remaining would be spec review and next meeting is ask for 2.7 and HTML has been put up and not received any review. You can find the index.HTML file. I believe conceptually the HTML integration PR means everyone’s constraints that there is no—it is expected and encouraged that platform exceptions are considered an error and are not differentiated from language exceptions by this method. So DOM exception would return true from the predicate for example. The exact editorial means by which that’s achieved in HTML is of course consider altered by the normative concept I believe would be the same regardless. I just basically wanted to—I will go to the queue. My intention is to work through this issue and then hopefully there would be no other reason not to advance it to 2.7 pending spec review next meeting. + +NRO: You’re presenting tomorrow another second method. Is that also not doing proxy piercing? + +JHD: Correct. I believe that proposal—I’ve only recently signed on to champion it. But I believe that proposal has never attempted to do proxy piercing. It wasn’t built to do that. So the question didn’t come up. But yes I would say that based on this same discussion, I would expect no proxy piercing but it might be different because array dot is array is proxy pierced and I think that is something that can be discussed tomorrow. + +NRO: Thank you. I think it would be good to keep the two proposals aligned. + +JHD: Agreed. + +NRO: I prefer the nonpiercing behavior and happy with the proposal and I will defer my question to tomorrow. Last time I checked it was not doing proxy piercing. + +JHD: Right. 
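A sketch of the behaviour after the proxy-piercing removal in PR 11; Error.isError has not shipped anywhere yet, so this is illustrative only, and the DOMException case reflects the HTML integration direction described above.

```js
const err = new Error("boom");

Error.isError(err);                                 // true – has the [[ErrorData]] internal slot
Error.isError({ name: "Error", message: "boom" });  // false – shape alone is not enough
Error.isError(new DOMException("boom"));            // true – platform errors behave like built-in errors

Error.isError(new Proxy(err, {}));                  // false – no piercing, and no chance of a revoked-proxy throw
Array.isArray(new Proxy([], {}));                   // true – unlike Array.isArray, which does pierce
```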
+ 

MM: So I agree this should not do proxy piercing. I am fine with this proposal going forward. But it’s worth noting explicitly the other irregularity that this proposal engages in, especially so that it doesn’t become a precedent. First of all, the reason why I like it not proxy piercing is also to avoid a precedent: we don’t want to expand that set of operations. And the other irregularity, besides proxy piercing, is this test of an internal property on a non-this argument. In general the hazard there is to make practical membrane transparency harder, or to make membranes less transparent. And in this special case, the reason why I find this to be acceptable is that, given this proposal, practical membranes going forward will probably reflect an error on one side by recreating an error on the other side. And this trades off one form of transparency for another. If somebody adds a property to the new error, that property is not seen on the original error, because the error itself is not a proxy. As someone who has written many membranes, I think that’s a fine loss of transparency practically. But I would caution against people reading into this a precedent for more tests of internal properties on non-this arguments. Since isTemplateObject—template arrays—did come up as a question, I want to say I think that’s a very different question; it has very different complexities, and I have become increasingly uncomfortable with it for reasons that do not reflect on Error.isError.

JHD: Thank you MM. I will make sure that the summary of this item includes that this does not set a precedent in either direction for checking internal slots on arguments of static methods.

MM: Thank you.

ACE: My queue item says, following on, +1 to not proxy piercing. It’s also kind of a shame that if you proxy pierce there’s a chance it will throw an exception, which feels like exactly what you don’t want to happen inside of a catch handler, where this is likely to be used. And then also, as one of the spec reviewers, the spec as it emerges looks good to me.

JHD: Yeah, I mean, it is definitely weird to have a predicate that can throw. So I prefer it for that reason as well.

SYG: To add some pedantic colouring to platform errors returning true for isError: in my mind the litmus test is not that platform errors should always return true for Error.isError. If the host makes an error that has Error.prototype on its prototype chain, plus the stack trace and the other magic capabilities that native JS errors have, then it should return true for `Error.isError`. Something built like a JS error, as DOMException is – except that from the spec point of view it doesn’t have the error data internal slot, which is basically a spec detail. If it acts like a built-in JS error, it should return true for Error.isError. If the host wants to expose platform errors that don’t have those capabilities, they should not return true for Error.isError.

JHD: I completely agree with your take on it. I can’t think of a better term than platform exceptions, because I don’t think anyone has chosen to create host errors that don’t meet the criteria you mentioned. I agree with all of the nuance you discussed. I think ACE already talked about the throwing.

### Conclusion

- PR removing proxy piercing will be merged
- This proposal does not constitute a precedent for checking or not checking internal slots on arguments (not the receiver) of static methods
- Will return at the next plenary to ask for 2.7

JHD: Okay.
So since I don’t see anything else on the queue, I will merge the PR that removes proxy piercing. So here is my summary. I will merge the PR that removes proxy piercing. This proposal does not constitute a precedent for checking or not checking internal slots on arguments of static methods and I’m going to come back at the following—at the next plenary meeting and request stage 2.7. + +MM: For specifically non-this arguments. + +JHD: The receiver arguments as opposed to the receiver. + +MM: Right. + +JHD: I will tweak the notes to make sure that is clearly expressed. Thanks everyone. + +USA: Thank you JHD. + +## Temporal update & bug fixes + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-temporal) +- [slides](http://ptomato.name/talks/tc39-2024-07/) + +CDA: Temporal. Do the champions of the proposal that I just named have a consensus on how we’re going to pronounce that? + +JHD: That’s bait, CDA. + +CDA: What? + +JHD: That’s bait. Trying to start a fight with someone. + +CDA: I heard champions call it both. Retract. Withdrawn. + +PFC: I don’t think there is a current consensus. + +PFC: I wonder if we can get consensus to delay my presentation while we watch Chris’ cat. This is fascinating. Everybody except me can do both. + +PFC: (Slide 1) For anyone who hasn’t met me yet, my name is Philip Chimento, one of the proposal champions of TEM-poral or Tem-POR-al. I won’t say how it’s pronounced. I work for Igalia and I’m doing this work sponsored by Bloomberg. Thank you. + +PFC: (Slide 2) I’m here to give a progress update and request some normative changes to the proposal. The progress update, very similar to last time. Temporal is in our opinion close to being done. The focus of the champions group right now is making sure that all of the implementations that are currently being worked on are successful. We’re doing this by fixing corner case bugs and making editorial changes to improve clarity and simplicity of the text. Now, you remember that in the previous plenary meeting in June, we landed a large change to remove outright a lot of APIs due to concerns about the complexity of the proposal and the binary size of JavaScript engines that implement it. Going forward, we will be assuming that this large change that we made in June has resolved these concerns. Please speak up as soon as possible if that’s not the case. + +PFC: (Slide 3) So we see no reason to delay. Please by all means go ahead with your implementations, ship them unflagged. There’s no restriction on shipping them flagged anymore since the ISO 8601 string annotations have been included in a published standard. If there is something preventing you from shipping Temporal, please let us know as soon as possible, and we will do our best to try to resolve it. Don’t wait. If there’s going to be a need to make changes, we want to make them as early as possible so that other implementations have plenty of notice. We are still having a regular Temporal champions meeting. But the focus is now squarely on “how can we help implementers?” rather than “what do we need to change?” And it has been for some time. If you are implementing the proposal, please feel free to join in the champions meeting. It’s biweekly Thursday at 8:00 Pacific time or if that time doesn’t work for you, we will happily meet another time that does, if you have questions. 
+ +PFC: (Slide 4) Here is a graph that I made earlier this month showing the current test conformance of implementations that have a partially complete Temporal implementation. SpiderMonkey passes 96% of the tests. V8 and LibJS three quarters. JavaScriptCore 40%, and Boa is at 23% and they have either landed since the time that I wrote this slide or are still about to land a change that would get them to 32%. So things are looking good. + +PFC: (Slide 5) We have a small follow up from last month’s meeting. If you remember, we had an item proposing to make all valueOf methods of Temporal objects the same function object in order to reduce the number of built-in objects. And the same for toJSON methods. Last month in the plenary, Mozilla asked for more time to investigate it before agreeing to it, so we agreed to come back to it later. I have since heard that Mozilla investigated the feasibility of this and their position is that we should only do it if we still get the feedback that Temporal is too large to ship after all the other changes. Now, since I mentioned earlier that we are assuming that the concern has been resolved for now, we are proposing not to pursue this collapsing of valueOf and toJSON methods at this period of time. If anybody wants to do this because of size concerns, please let us know as soon as possible. + +PFC: (Slide 6) Another follow up, I added this slide yesterday. Yesterday in the plenary meeting, we had a discussion in the context of the item about truncation before or after range checking. Some people in the discussion suggested to apply the “Stop Coercing Things” convention to Temporal. We originally didn’t go back and apply this convention because we had consensus at the time it was adopted not to apply it to Stage 3 proposals. It sounds like some folks were nonetheless calling for this. We talked about it in the champions group. We would caution against making nonessential changes in Stage 3 like this. But if the plenary happens to have overwhelming support for doing this, let’s get agreement on it now. But because of the short notice, that agreement is not going to be in the form of a PR that we can look at and decide whether to adopt. We do not want this to delay the proposal any further. So we also don’t want to bring the PR back to the next meeting. + +PFC: (Slide 7) I looked through the proposal and wrote down all of the APIs where we could potentially stop coercing things. There’s a list of them on this slide. Not all of these seem like they should change. So, yeah, I would ask if there are people who feel strongly about Temporal changing these APIs to stop coercing things, then in that case I would like to get consensus on changing it in principle and having the champions group go through and decide which of these it makes sense to change. So, for example, toLocaleString seems like a candidate for not changing because toLocaleString in ECMA-402 creates an instanceof `Intl.DateTimeFormat` that is an API that already existed and therefore cannot stop coercing things. That’s one example of something we shouldn’t change, there could be others. So my request would be if you feel strongly that we should go back and do this, that we give a sort of blanket approval for the champions to investigate which ones of these make sense and then change them. + +PFC: (Slide 8) All right. I’ll get on to the bug fixes that I’m going to ask for consensus on. + +SYG: Did I just miss? Was it my connection or did everything go silent for a minute? 
+ +CDA: It was your connection. + +PFC: What part did you miss? I can go back and repeat that. + +CDA: I had a point of order about seeing the list of APIs but now it’s—sounds like your sound cut out as well. Are you okay now SYG? + +SYG: I think I’m back now. Sorry. + +CDA: I’ll post the link to the slides as well, just in case. + +PFC Yeah, it’s on Slide 7. Should I move on with the bug fixes? Do we need to go back and redo some things? + +SYG: Please continue. + +CDA: Great. + +PFC: (Slide 9) All right. Turns out the time zone database has a corner case, you know, over a hundred years ago Toronto switched to daylight saving time at 11:30 p.m., they skipped the hour between 11:30 and half past midnight. So that means the day of March 31st, 1919 started at half past midnight and the algorithm for calculating the start of day didn’t take this into account because it is the only such case in the history of the TimeZone database where this happens, that a day did not start on a whole hour boundary. + +PFC: (Slide 10) So, yes, we would like to fix the algorithm to take this into account. This affects a couple of methods: Temporal.ZonedDateTime.prototype.startOfDay, .hoursInDay, .round, .withPlainTime, Temporal.PlainDate.prototype.toZonedDateTime, and parsing a date-only string with a time zone annotation. I would like to thank Andrew Gallant (BurntSushi) for discovering this edge case. + +PFC: (Slide 11) Here is a short code sample of what would change. So the result of hours in day on that particular day would be 23 and a half instead of 23. And then asking for the start of day, asking to change the time to start of day or parse a date-only string with a time zone annotation, or convert the date to a ZonedDateTime without the accompanying PlainTime, all of those would previously give 1:00 a.m. on March 31st and now give half past midnight on March 31st, which is the correct answer for these. + +PFC: (Slide 12) The other adjustment we would like to make, there’s a certain rounding operation where you can round a duration. Durations can have calendar units and round to a number of calendar units and specify a rounding increment. So, for example, this operation here rounding a 9-month duration to the next highest ceiling increment of 8 months, relative to January 1st on a leap year, is going to give 16 months. This is all fine. This is an anticipated and intended use case of the duration round method. + +PFC: (Slide 13) It’s complicated if you try to do that while simultaneously balancing to a larger calendar unit. This is unclear what the programmer should expect. So if you add `largestUnit: ‘years’` to the options, you could interpret this as rounding to 16 months and then balancing to the largest unit of years and then the result would be one year, four months. You could also say the result of this operation is going to have a months component that is divisible by 8 and therefore one year and zero months is the next highest result within the constrained set of results from this function. So this is an edge case, this particular combination of options with smallestUnit being a calendar unit and largestUnit being a different larger calendar unit. + +PFC: (Slide 14) We believe that we actually just never considered this case. It didn’t come up in real world use. It was found as a result of an implementer looking for corner cases. If we wanted to decide on the behavior for this, ideally we would want to research in which situations actual users were using this operation in the real world. 
It would complicate the rounding algorithm for unclear benefit. Given we’re at Stage 3 and we already had a push for simplifying things, we prefer just to not support this case, where you give that particular combination of options, and keep things simple. So concretely, the proposal is to throw a RangeError if you give the following combination in the options of Temporal.Duration.prototype.round: a roundingIncrement greater than one, and a smallestUnit of years, months, weeks, or days, and a largestUnit not equal to smallesUnit. So this edge case was discovered by Andrew Gallant again and Adam Shaw who is implementing a Temporal polyfill. + +PFC: Any questions so far? Looks like there’s some things on the queue. + +DLM: So I prefer to not see any more Temporal API changes at this point and pretty clear when we discussed and they would be design principles with new proposals and in particular with temporal at Stage 3, as pointed out in development for seven years as an implementer I would like to not see more API changes if we could avoid them and in particular my understanding of stage 3 is that proposed changes would come by implementation feedback. I don’t believe that’s the case here. And since I’m next on the queue, I support the bug fixes that you’re proposing. + +PFC: Thanks. That certainly works for me. The existing consensus is to not go back and apply “Stop coercing Things” to Stage 3 proposals. This would be just sticking to that. MF? + +MF: So, yeah, this is not going to be like a requirement or anything. But I do think that for the—on the coercing point, if we could with very little effort go through all of the existing 262 APIs and know which ones would be web compatible to remove coercions from today, you know, just magically somehow, we could do it. But we know that that’s a lot of work. So we’re not pursuing that effort. But for things like this where we know today that it’s very likely that we can remove the coercions, I would think that it’s worth that amount of effort to make that change, but of course, implementations may have different opinions about that. That’s what my opinion is. I would ask that we just try to make that change. I think it’s fairly minor. It should be considered as minor as the Canadian start of year change. Why not just do it? + +PFC: Okay. I recognize that opinion as well. + +CDA: There’s a reply from SYG. + +SYG: I want to see if DLM can give more colour to why is against it? Is it like the principle of changes to Stage 3 things? Is it something in practice with the SpiderMonkey implementation? + +DLM: It’s just the principle. ABL has been doing a huge amount of work to get this ready. If we were to do it, it would be a small change. But it’s a small change on the steady series of small or larger series over the course of years. But it’s not an implementation concern. It’s more a principal concern. + +SYG: Thanks + +DE: I apologize for muddying the waters a bit by voicing support for removing coercion in Temporal in the Matrix chat. When the coercion topic came up, this is one of my original concerns that we not apply it to Stage 3 proposals. If it were possible to do this very cheaply, I think it would be nice. But I guess the—I would really want to conclude today on in principle whether we’re making this change. If we don’t conclude today we’re making this change, we’re not doing it for Temporal because as PFC said, it’s already shippable. The work that PFC did that went into the removal was kind of more significant than initially expected. 
I think he could tell it was going to be a lot of work. But there are just a lot of places where coercion happens. It wouldn’t be realistic. + +PFC: In particular, I think it would invalidate a lot of the Test262 tests that we have. I mean, it might reduce their number which would be nice. But it would be a non-trivial amount of work. + +CDA: That’s it for the queue. + +PFC: First I would like to call for a consensus on the two normative PRs that fix the bugs. Is there any objection to this? Or any other explicit support? We already have explicit support from DLM, thank you. Can we conclude this has consensus? + +CDA: I support this as well. Do we have any objections? Hearing nothing, nothing on the queue, you have consensus. + +PFC: All right. Then as for the stop coercing things item, it doesn’t sound like there’s strong universal agreement to go back and do this. So can we say there is consensus on leaving things as there are? + +DE: I’m on the queue, yes, I support reaffirming consensus on doing coercion in temporal. Are there other opinions? + +CDA: Yes. That makes sense to me. But I’m a little bit—you just took it off the queue. But it seemed to contradict what you mentioned on the queue about no coercion. + +DE: Oops. I meant no coercion related to – + +PFC: How about let’s say no change? Can we have consensus on no change? + +DE: Consensus on no change. In theory, we don’t need consensus. Because the topic was raised, it’s great to be reaffirming that so there’s no confusion. + +CDA: That’s a great point. I am supportive of that. I will +1 that. And remind that the “Stop Coercing Things” was to indicate that for a general guideline and that’s something that’s the baseline rather than impossible to have an exception to it and it makes sense if it ever makes sense. not seeing anything on the queue, any objections to that? All right. + +PFC: Thanks everyone. + +CDA: No changes. + +PFC: I typed up a proposed summary for the notes which is listed here. I will paste this in the notes and add a sentence about the reaffirming the consensus to not have any changes with respect to coercion. And that’s it from me. Thanks. Just inside the time box, I think. + +CDA: That’s great. My only request would be if you could please update the notes right away. + +PFC: Of course. + +CDA: Some promise they will do it and then they do not. + +PFC: I will do that right now. + +### Speaker's Summary of Key Points + +- Temporal is nearly done, and the focus is on helping implementations get to completion. Implementations should complete work on the proposal and ship it, and let the champions know ASAP if anything is blocking or complicating that. Follow the checklist in #2628 for updates or feel free to join the champions meetings. +- Collapsing valueOf and toJSON into identical function objects will not be pursued at this time. +- There are several places in the Temporal API where coercion of input arguments takes place, at least some of which are for good reasons. +- There are two bug fixes to consider. + +### Conclusion + +- Consensus was reached on two normative changes: one to fix a TZDB corner case in calculating the start-of-day of March 31, 1919, in Ontario, Canada, and another to disallow a particular ambiguous combination of options in Temporal.Duration.prototype.round(). +- The existing consensus not to go back and apply the Stop Coercing Things principles to this Stage 3 proposal was reaffirmed. 
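For reference, illustrative sketches of the two fixes, with values as presented in the slides; Temporal is Stage 3 and not yet shipped unflagged everywhere.

```js
// 1. TZDB corner case: Toronto skipped 23:30–00:30 on 1919-03-31, so the day
//    starts at 00:30 rather than 01:00.
const zdt = Temporal.PlainDate.from("1919-03-31")
  .toZonedDateTime("America/Toronto");
console.log(zdt.toPlainTime().toString()); // "00:30:00" (previously "01:00:00")
console.log(zdt.hoursInDay);               // 23.5 (previously 23)

// 2. The ambiguous combination of rounding options is now rejected:
const nineMonths = Temporal.Duration.from({ months: 9 });
nineMonths.round({
  relativeTo: "2024-01-01",
  roundingMode: "ceil",
  roundingIncrement: 8,       // increment > 1
  smallestUnit: "months",     // calendar smallestUnit
  largestUnit: "years",       // different, larger largestUnit → RangeError
});
```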
+ 
## Joint Iteration naming discussion (issue #27)

Presenter: Ashley Claymore (ACE), Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-joint-iteration)
- [issue](https://github.com/tc39/proposal-joint-iteration/issues/27)

ACE: I will give MF the floor to begin with. + CDA: Again? + MF: Sorry. Yeah, I just wanted to quickly preface this discussion. ACE reached out to me about this topic just after reaching Stage 2.7, and I am fully in support of still making a change here. Our typical rule after Stage 2.7 is that we don’t ask for frivolous changes, only ones required by things we discover during implementation, but because he reached out, I purposely put off doing any testing work so a rename would be no burden to me. So I don’t want it to be in the back of anybody’s mind that we shouldn’t be making a change to the name here, if that is what we desire. + ACE: Thanks MF. I will share my screen. It should work. I’m sure I’ve given Webex permission in the past. If at least one person could confirm, I’ll assume everyone can see. + CDA: We are looking at spec text. + ACE: Thanks. Great. So a reminder: another aspect of iterators is this proposal from MF, `Iterator.zip`, or joint iteration. The proposal has two methods. There is `Iterator.zip`, where let’s say you give it an array of iterables—you can pass in any iterable, but when a lot of people think about this, they imagine passing in an array—and there’s also an alternative, sibling method that is currently called `Iterator.zipToObject`. And this is the one I’m talking about today. For context, for people who weren’t at the last plenary: the original naming was `Iterator.zipToArray` and `Iterator.zipToObject`, and at the last plenary we decided to drop the “to array” part and go with `Iterator.zip` for the iterable version, because that’s what most things in the ecosystem already call this method—in Lodash and Underscore and Ramda and all of those great utilities. So that’s why we have this `zipToObject` name, to distinguish it from its counterpart `zipToArray`. What I was thinking after the last plenary was: maybe `zipToObject` made sense when we had `zipToArray`, but now we don’t have `zipToArray`. Is this really the best name? Maybe we just want to double-check we’re all happy with this. I think we could maybe do better. Worst case scenario, we say this is fine, and we proceed with the current naming. + ACE: So what made me think about this was that there’s another proposal—which, full disclosure, I am one of the champions for—the await-dictionary proposal, which has a similar flavour to this proposal. We already have `Promise.all` in the language today; you can pass in any iterable, but it’s just easier to think about arrays. Pass in an array of promises and you get back a promise containing an array with the results, which has the same shape as zip: you pass in an array of iterators and you get back an iterator of arrays. And I’m sure the category theorists amongst us will say, oh, this is like a bifunctor or a monad—I’m sure there’s a name for the pattern—they follow the exact same shape at the type level. And `Iterator.zipToObject` also follows the same shape as this other proposal, promise-await-dictionary, which solves effectively the same problem. 
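To make the two shapes concrete, a rough sketch based on my reading of the proposal; the exact iteration behaviour and option handling may differ, and the keyed method’s name is the subject of this discussion. The `Promise` line at the end uses a hypothetical method name purely to show the analogy.

```js
// Positional: an iterable of iterables in, an iterator of arrays out.
const pairs = Iterator.zip([[0, 1, 2], ["a", "b", "c"]]);
// yields [0, "a"], then [1, "b"], then [2, "c"]

// Keyed: an object whose values are iterables in, an iterator of objects out.
const rows = Iterator.zipToObject({ index: [0, 1, 2], letter: ["a", "b", "c"] });
// yields { index: 0, letter: "a" }, then { index: 1, letter: "b" }, ...

// The await-dictionary idea has the same shape for promises (hypothetical name):
// const { user, posts } = await Promise.fromObject({ user: getUser(), posts: getPosts() });
```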
It gets a little bit unwieldy when you’re passing in things based on the order, you know, works for two things, maybe three things, but beyond three things it really starts to become less human readable to keep in your mind what is the full fifth, sixth thing you’re passing in when you’re then reading it back out. So zipToObject is here to help say don’t worry about the order of things, just give things a name so you pass in an object where the keys kind of name the iterators. So you can then reuse the names in the result. And the promise await dictionary proposal is the same thing but the promise all. Promise all gets a bit unwieldy and when you’re passing in a few promises, it’s fine. When you start to see people `promise.All`ing ten things and then array destructing the same things out, you start to lose confidence until the order is correct. + +ACE: So due to the similarities, I can imagine if we moved forward with `iterator.toZipObject` that it would set a loose precedent that this pattern would be a name we would want to use in the other places where the pattern works. But when I thought about `promise.allToObject`, it just didn’t feel right. And some of the reasons I don’t think it feels right that we’re not really going to object. It’s not like when you have toString we are taking something and turning it into the string. The value we are getting back is an iterator, we’re not getting back an object. We’re getting back an iterator. In another sense we’re not going to an object, because the input argument itself actually must be an object. So really we’re kind of—nothing is becoming an object. What we’re doing is like turning an object inside out instead of having an object of iterators, we end up with an iterator of object. So it feels like the thing that’s really important here—oh, the other thing is that arrays are objects. And for people when I’m teaching people JavaScript sometimes I think it can be confusing trying to say array or object. But then people say aren’t arrays objects? So again I think maybe object isn’t the word we want to focus on here. So some suggestions I put down was that zip name, zip dictionary, zip from object. But actually I think the best one is KG’s suggestion which was zipKeyed. I think actually that’s much better than any of my suggestions, especially because key is already part of the javaScript vocabulary. We have object dot keys and `reflect.keys` and named and dictionary are maybe things that people talk about colloquially but we don’t use those words in the specification yet. Whereas zipKeyed I think fits in with the language we already have. And that is what I have. So I will open up the queue. And curious what people think we should stick with the current name or if they have suggestions for alternatives or if they want to +1 any of these? The current preference is the zipKeyed. I think Kevin has the best idea so far. + +MM: First of all, I very much like the idea of designing the operations and names on iterator for this purpose to be aligned with the corresponding operations and names for promise. 
There was a suggestion that I made a while back for `promise.all` which maybe it failed for reasons I don’t remember, but what I do remember is that a lot of people found it attractive and I think it would be attractive for this as well, which is to do—if the argument is neither an array nor an iterable, I suppose that means if the argument since arrays are iterables, if the argument is not an iterable, then you use the from object or this other keyed semantics. So you do one or the other depending on whether the argument is iterable or if you prefer, depending on the explicity if the argument is an array or iterable. But I prefer that. If that one is clear enough, which I think it is in practice, then I prefer that rather than adding a bunch of extra methods for this other dimension of variation. + +ACE: I remember you suggesting that I think it was back when we were in A Coruña in Spain. We talked about that. At first, I really liked the idea of doing it as an overload. But then I think—maybe a week afterward or something, I made a typo in the project where I forgot the `promise.all` takes one argument of the iterable rather than being far and I did `promise.all` for multiple arguments. If it was TypeScript it would have caught that. In JavaScript we don’t throw if you throw too many arguments to promise all and we just try to iterate the first thing and then it threw because the first thing wasn’t iterable. However, if we change the semantics to have an overload, then that typo sudden Ily becomes worse because it instead of throwing, it now tries to destructure the first thing. Now I’m fallen in disfavor of that even though I do like the idea of the overload. I think for it to work, we need to have to also validate people aren’t trying to pass more than one argument and that feels like something we don’t do today. + +MM: I think that’s an interesting case to raise. I certainly am sympathetic to not masking programmer bugs, not introducing changes that would mask programmer bugs. I’m not sure if it was on this topic, but I think it was where I actually suggested that in the other overload case, if there’s more than one argument, you throw exactly to catch the likely cases where this would be a bug. And I think that if that’s the only down side to doing it with an overload rather than multiplying the number of methods. We already have on promise, we have four methods on this category. And if we cannot multiply, that would be attractive. And then of course that applies as well if we end up with more than one variant of zip or if we end up with a more than one variant of async zip et cetera, if we can avoid multiplying methods where the universal uniform overload would do, I would find that attractive. + +KG: So we did discuss this specific topic. I just went and dug it out in the notes. Mark I know you’re not in the matrix. But it was the February 6th of this year is when we talked about the specific topic and got consensus on the current design of splitting out the methods after some discussion a number of people were really strongly in support of having separate methods for a variety of reasons including the fact that the shape of the output being different depending on the type of the input can be really confusing. So I prefer we just stick with the design that we previously had consensus on and only reopening the naming question. Also on the specific topic of multiplying the methods, it’s not as bad as you might think for a promise, because only all and all settled return arrays. 
You wouldn’t need to have like race or—you wouldn’t need to do this for race or any because those only return one value. + +MM: That’s a good point. + +MF: So I do want to apologize for not properly considering the cross-cutting concerns when choosing the name here. Originally having this named in the first place was inspired by your proposal. So we wouldn’t even be here without you. So I just wanted to say that as far as my preferences go on naming, I think that introducing the word dictionary might not be the best idea, it’s just a completely new concept. We've never used that term within the language. I know that it’s somewhat popular outside the language in the ecosystem. But I think we should probably stay away from that. So my preference would be named or keyed, those are both fine. + +WH: I just wanted to express a preference for `zipKeyed`. + +ACE: I think that looks like the queue. MF, are you happy with—do you want to ask if we can consensus for changing to zipKeyed? + +MF: Sure, yeah. Can we rename zipToObject to zipKeyed for joint iteration? + +MM: I want to explicitly note that if we do this, we should likely follow the parallelism suggested by this presentation for promise.all. Not that we need to vote on that. But we should keep in mind that’s probably what we’re going to do if we do it here. And on those grounds. I have no objection. + +MF: I agree. + +MM: Okay. You have some support. Chris Cole says I like `iterator.zip` and iterator.zipKeyed. CM says +1 to zipkeyed. + +CDA: Any objections? + +CDA: You have another +1 for zipkeyed from JWS. Now that I’m reading it there’s no end of message there. Did you want to speak? + +JWS: No. That was it. + +CDA: Okay. And still have not heard or seen any objections. All right. Sounds like we have consensus for zipkeyed. + +ACE: Thanks everyone. + +CDA: ACE, would you like to or could you please dictate a summary to the notes. + +### Speaker's Summary of Key Points + +- We discussed the context for the proposal’s initial naming of "zipToObject" +- ACE presented some reasons why this may not be the optimal name +- We also discussed that the naming here should be reflected back into the 'promise-await-dictionary' proposal + +### Conclusion + +- `Iterator.zipToObject` has been renamed to `Iterator.zipKeyed`. The proposal remains at stage 2.7. + +## Scrub of Stage 2 Proposals + +Presenter: Peter Klecha (PKA) + +- no proposal +- no slides + +CDA: All right, thank you. Moving right along. I think I forgot to update TCQ after updating the schedule. We have PKA, are you there? Scrub of Stage 2 proposals? + +PKA: Yes, I’m sure. + +CDA: Let me fix. What’s your GitHub user name again? + +PKA: Pklecha, I need to quit and reopen. I will be back in one second. + +CDA: Okay. TCQ is now fixed. Are you there PKA? You’re on mute. + +PKA: Yes, sorry. Turn it again to share. You can see my screen? + +CDA: Yes. + +PKA: So hi, I’m Peter from Bloomberg and here today to do a review of our Stage 2 proposals and i’m just going to apologize in advance if you hear my daughter at any point. The goal here is just to update the committee on the status of all of our Stage 2 proposals in particular we want to identify proposals that are stalled and maybe could benefit from new champions on additional champions proposals that have fallen unused and could be potentially removed from the active proposal list and to identify cases where proposals might be blocked. 
So what I’m looking for here in general is just for proposal authors and champions who are present to give as brief as possible an update on the status—no pressure whatsoever; one line, “this proposal remains active and I’m working on it”, is totally acceptable. We just want to identify those other cases where there are blockers or stalls that the larger community may not be aware of. So this is a list of proposals that we have heard from recently, which is to say in the past 12 months. Nothing more needs to be said about these proposals; the committee is up to date on them. And here we have the list of proposals that we have not heard from recently. I’m going to go through each of these one by one. As well as these three proposals, where I’m just noting that I’m not aware of champions who are active in the committee; those are special cases. Let’s begin with Time Zone Canonicalization, and I don’t know if JGT or RGN is available to give a brief update on the proposal. + RGN: I’m here for this one. It’s definitely not inactive. I believe it already got merged into Temporal, but I don’t remember for sure. + CDA: I’m on the queue. I’m pretty sure we talked about this more recently than May 2023. + PKA: Sorry. I should also add that I got the dates here from the proposals repo, and apologize if any information is inaccurate, either in terms of the dates or the list of champions. + CDA: I wonder if the last time we talked about it, it might have been as part of something else and then didn’t get updated. I mean, the proposals repo is really good, but if it’s mistaken, it wouldn’t be the first time that it didn’t get updated for something. PFC is on the queue. Go ahead, PFC. + PFC: I believe this already went to Stage 3. + DE: I’ll try to track down the meeting where that happened and I will update the proposals list. + PFC: I’m looking through the old agendas right now. + PKA: Okay. Let’s go on to the next topic. + CDA: It’s very much active, so I think we can – + PKA: Great. + PKA: Glad to get that updated. Next we have symbol predicates, and I believe both the champions are here. Can anybody make a comment? + CDA: This is from JHD? + JHD: Yeah, it’s both still active and also blocked. There are two predicates; one of them is not blocked, and for the other one I need to come up with a more compelling case to convince V8 in particular, although there may be other engines that have concerns. So yeah. + PKA: Great, thanks for the update. Next we have module declarations. + NRO: This has not been presented recently because we have a lot of module proposals going on. So we’re waiting for the ones ahead of it to stabilize before continuing with this one and with module expressions. + DE: I think the plan is we will pick this up again, right? + NRO: Yes. + DE: Some interesting feedback from the committee would be whether people feel like the way that we were previously going—using this kind of lexical namespace, but statically—was acceptable. So if you have opinions about that, either shout them out now or come to some of the modules calls that we have on the TC39 calendar. I would like to go the way that was previously proposed—which NRO has fully written out a good specification for—where module declarations are lexically scoped variables, but they’re available at static-semantics time, the way that, you know, your `let` declarations are checked for duplicates. + NRO: And if anybody wants to join the module harmony meeting, please reach out in advance. 
We can make sure that all interested people are available in the meeting. + DE: I want to point out that the ES module source proposal solves a lot of issues for this, because these module declarations will be that type of object. + PKA: Okay, great. I think that also qualifies as the update for module expressions, correct me if I’m wrong NRO? + NRO: Yes. + PKA: Great. So moving on, we have JSON.parseImmutable. + ACE: So `JSON.parseImmutable` is like a sibling proposal to Records and Tuples and can only move forwards as quickly as that one can. And we did present—I’m not sure exactly when, but within the last six months or so—on the Records and Tuples idea space. DE, feel free to add to that. + DE: No, that sounds good. Additional input into the Record and Tuple idea space would be really welcome. I think we have a certain set of people who are involved, and for the rest of the committee, we were hoping the last presentation would drive involvement, so that would really help. So I think that’s all for this topic. + PKA: Great, thanks. Next we have String.dedent. Are either of the champions able to make a brief comment on the status of the proposal? + JRL: [garbled] Can you hear me? + PKA: The audio is bad. + PKA: We can come back to `String.dedent` when Justin is able to fix the audio issue, and the same goes for destructuring private fields, which is also JRL’s, so I will move on to RegExp buffer boundaries. RBN said he wouldn’t be here; this proposal is an active proposal on the backlog. Next we have the pipeline operator proposal. It would have been nice to hear from RBN about this. I don’t know if anyone else feels competent to give an update on this proposal? + NRO: I can just say that whenever I talk with people at conferences, this is the proposal that they ask about the most. So as a committee we should probably figure something out. + DE: Yeah, I share NRO’s impression that this is widely requested. In the previous discussion, there were alternatives—the Hack version and the F# version—and the current proposal is the Hack version, so it would be interesting to hear from people in committee if anyone disagrees with that direction. I would encourage people to voice opinions now briefly, if they have them. Or if you feel like the pipeline operator is not worth adding to JavaScript, that would also be good input. There is somebody from Bloomberg who expressed interest in getting involved here, and a lot of people here in TC39 who are wanting to get involved. Bloomberg, not Igalia, sorry. + MM: I am not sure this is the feedback you are looking for, but I am in favor of this proposal—this proposal as proposed, not the F# one. + DE: Yes, that was the feedback I was looking for. + CDA: DLM? + DLM: This predates my involvement with TC39, but I know there is a general negative feeling about the pipeline operator. Because those conversations were before my time, I’m not quite sure about the reasons or whether they concerned any specific part of the proposal. + CDA: All right. SYG? + SYG: I am not going to go into detail now, but for the pipeline operator, we recognize the demand, and we would like to find a way to serve that demand in the language without supporting it in the engines themselves. + DE: Can we go into that at a future meeting, SYG? + SYG: Yes, planning on it. + DE: Great. + CDA: Okay, nothing else? Okay sorry, EAO? + EAO: This is mostly just an aesthetic view: I believe this pipeline syntax to be very noisy and I don’t like it. 
But this is just an aesthetic opinion, and I don’t know how highly we count those. + MM: We count them highly. + AKI: Sometimes, what is the difference between aesthetics and developer ergonomics? + CDA: None of the champions are here, so shall we move on? + PKA: Yes, I will just note it would be nice to hear from the champions with a status update on this at some point, whenever possible. Um, sorry, next we have `Map.prototype.emplace`, and this is EPR, who is not at the committee anymore, but if anybody knows about this proposal or has any interest in contributing to it, speak now? + JHD: Yeah, I don’t have the bandwidth to champion it now, but I think we should move it forward. My understanding of the difficulty is that there are a few different possible semantics, and it is not yet clear how those can be cleanly represented in an API that is palatable to the broader committee. But yes, if someone would like to champion it, that would be amazing; at some point in the future my plate might be clearer and I can champion it, but I would like to see it stay where it is for now. + SYG: So a question, is this upsert? + JHD: Yes it is. But renamed. + DE: The names are kind of silly, but I think this is a widely requested feature. There is a project, I think in the planning stage, with the University of Bergen students getting involved here, so it would be very useful to have any committee feedback on this organized for them so they can take up the proposal. Any opinions, or cross-references to existing opinions, would be helpful. + [There was confusion on whether the University of Bergen students would champion this proposal; DLM later clarified that he would work with them on championing it.] + PKA: So, dynamic import host adjustment; the champion is KOT, who is not known to be active in the committee. If anybody knows about this proposal, please share? + NRO: This proposal came out of a request related to Trusted Types, and it has had some concerns for a while. I think I might be—yes, I was mistaken. I have been working with the Trusted Types folks for a while, but there are no updates on this proposal, and it has not been brought up with them. + DE: Okay, the author of this is in contact with a number of people in TC39 who are working on Trusted Types. Can we give someone the action to get in touch and see what everyone wants to do with this? + NRO: I will attend to that. + DE: Thank you very much, Nicolò. + PKA: Okay, collection normalization? Does anyone have status on this proposal, or any interest in taking it on? + DE: ACE actually brought up this design space in his Records and Tuples presentation. It is linked, but it can be done separately. + JHD: This is one that, as I recall, was almost ready to go to Stage 3. There had been a compromise reached that the primary objector was content with, but the champion was unhappy(?) with the compromise, and so this is one I am much more likely to take on sooner, assuming that is an accurate state of affairs. + DE: JHD, what was the compromise? + JHD: So, my recollection is that the primary objection was the desire to treat a Map and Set agnostically, so I can have a function design that does not care if it is a Map or a Set. 
If I remember the correct proposal and the compromise was that if you provide—that you can provide one of the two—I will have to refresh my memory but something that would allow for that use case, but in the majority case, it would also allow for—it would allow for both mental models that is a set has only values or a set has only keys as well that I want to be agnostic of map versus set and I can dig up more content in the future. But essentially that was the compromise as I recall. + +DE: Okay, great. Do you want to see if anyone wants to be a coChampion on you on this? + +JHD: I will come back on the next plenary after I have done some research because I need to be confident about how much work is involved and how much room I have on my plate before even Championing it. But I will leave it through until then at least. + +DE: Okay sounds good. + +PKA: Okay thanks I will circle back to JRL? + +JRL: About `String.dedent`: It was championed by PayPal, who is is no longer a member. There are no current blockers, I just have not written the test 262 test to get this 2.7. + +DE: You don't need test 262 tests for stage 2.7, you need that for stage 3. So let’s propose that for 2.7. + +JRL: I can do that for next meeting. + +DE: Do we have volunteers to help with the test? + +JRL: Sure. I have a test, I just don’t want to learn test262. I have a private—I have a library that implements string dedent. It is a complicated case because you cannot see the light space and you need to write these at fixture tests for these to make any sense, and if you try to write them directly in JavaScript in some fashion, they are terrible. + +DE: Oh I see. So what if you put that in the test 262 staging directory? For now? I don’t know if that is enough to get to Stage 3, but it would definitely or Stage 4 but it would help somebody else write the tests if you don’t have time to do so. + +JRL: Okay. + +CDA: Can we return brief to up upsert/emplace because MBH clarified in the delicates chat this is a group of students from the University of Bergen would be implementing it in several engines but a group of students implementing in engines a TC39 Champion does not make, so, it would still be great if we can someone to— + +JHD: I would caution trying to built implementation of things that are not at Stage 2.7 because that the change significantly. + +DE: The group is aware of this and many are coming to TC39 as well. And is because you have corrections? Or? + +DLM: Sorry, I wanted to say that I would be one of mentors for this and the implementation in SpiderMonkey + +AKI: MBH said that this is going to be a learning exercise for students and trying to bring them 2.7, and DLM, who was just speaking said they will be the mentor for SpiderMonkey for that implementation as part of process of student’s learning on how to put something into an engine? Is that clear? + +DLM: Good from my point. + +CDA: Okay that is great and I am happy to hear that is happening. And so not necessarily spend a bunch of time on this. But we would still be great to have a Champion for this proposal. + +DE: Yes it was—okay go ahead. Because it was already clarified that you know this group of people is going to be working on bringing it to committee. + +PKA: So our final item is destructuring private fields. + +JRL: I am no longer working on this although DE has a proposal that would assume it entirely. 
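For readers unfamiliar with the destructuring-private-fields idea, here is a rough sketch of the kind of code it would enable. The commented-out line uses the proposal’s tentative syntax, which is not valid JavaScript today, and the class is purely illustrative.

```js
class Point {
  #x;
  #y;
  constructor(x, y) {
    this.#x = x;
    this.#y = y;
  }
  toArray() {
    // Today, each private field has to be read explicitly:
    const x = this.#x;
    const y = this.#y;
    // The proposal would allow destructuring them instead, roughly:
    // const { #x: x, #y: y } = this;
    return [x, y];
  }
}
```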
+ +DE: I have a proposal for how we could handle private name declarations, and this proposal would make it clear that the current destructuring private fields proposal is OK. It makes it clear that the syntax space used in destructuring private fields won't clash with something else. This would then free up the destructure private fields syntax to what JRL proposed before. I would like to come back and present that sketch of this alternative feature. I don’t know if I will have energy to champion the proposal myself, but if folks are convinced about the direction, then we can proceed with what JRL was proposing before. Do you have thoughts on this? + +PKA: Um, no that sounds good. That becomes—we just need a Champion to resume as we move forward. But that is a great update. Um, I think unless there is another comments, I think that concludes the review. Thank you everyone. + +DE: Can I and is there any advice people have for future scrubs? Or direction? Or get in touch with PKA. + +CDA: Um, PKA, do you want to dictate a summary for the notes? + +PKA: I think I will just enter it into the notes just which proposals have which updates and I don’t think we heard that many proposals are immediately need of being scrapped but we heard a variety of different updates for proposal summaries and some energy from without order to continue. Others are proceeding just fine. And I will write something more detail and more legible in the notes myself. + +## Normative: Add text about locale installation in browser implementations as fingerprinting vector + +Presenter: Ben Allen (BAN) + +- [PR](https://github.com/tc39/ecma402/pull/780) +- [slides](https://notes.igalia.com/p/fingerprinting-slides#/) + +BAN: Okay let me share my screen. So this concerns a PR that is a work in-progress for quite some time though it is very little little text because it is touches on the something that is a differ contentious issue and touching on the fingerprinting which arises in interNational work. And this can be sensitive. So, the relevant PR, I think it is up on the agenda, is 780 and it adds what is in a wondering note because it is a normative change. Saying that specifically browser implementations can’t reveal certain information. Related to low conceptualize data and response to an issue 588 called "ships entire payload", and I will give context and history behind this, and so jumping back to 2021 or 2022 oh yeah 2021. Okay one second. + +BAN: So, this issue originally came up with the `Intl.Enumeration` proposal and when it came up, it was for respond to request to give all the locale on a system, and this can be a fingerprint risk because you can identify fingerprint because if it changes, you can identify specific users based on the locale of the install. And noted to avoid expanding the fingerprint purpose and browser cannot allow locale or related data to change as result as actions taken by the user or if they do allow this, information about the installed 0 locale and so forth cannot be exposed in a discoverable way. And I think I said this is something that readily became not a problem in `Intl.Enumeration` and the key thing here is while the problem with `Intl.Enumeration` and that the locale and that would identify some one. 
This is referred to as passive fingerprint and that is to identify someone without looking out of the order and so it was changed so the request is one by one and this is active fingerprint and you can still identify someone but because you are making all of these suspicious request people can see it and can tell that you are tracking someone. Did there is an internal Mozilla privacy and there is no differences between two users and using the brower version on the same platform irrelevant of their browser behavior patterns. So there is nothing you can do related to the locale or related locale data currencies and so using the same browser on the same user on the same platform. + +BAN: I believe I say this in the slides but for context of why is this important. If some unhigh thetacalcally and I will say this in the slides later and this is something that is not possible in the current browser implementation but if one would install a locale or data related to locale and it could not track you personally but to track whether or not if you are installing a not commonly used locale and you might be a number of potentially oppressed cultural or ethnic minorities and so this can be sensitive data. So this was the internal Mozilla privacy review and this is taken into account for the integration proposal but this is a problem that will affect more things. And as I mentioned, this is something that currently no web browser currently does and the web browsers has locale data and with that said, node.js has aloud you to do this for some time, a number of years because the problem of fingerprinting does not relate in that context. You know you are not revealing information potentially to a server. But, it would be very nice to be able to install new locales and be able to install new locale data and some cases more than nice. And if you look at the discussion on that issue, and on the poll request, there it is—I want to say it is the most active conversation I have seen on any issue or any pull request but I am sure people who has been around longer and has seen it longer. But for minority script users for example, Steven pointed this out, the default breaking for Thai script will cause problems, and you can install data, and you can make pages comprehensible, and respecting how you use the language but also, if servers are able to say to check and see oh, do you have this installed? Do you have this particular different breaking for thai scripting, they know you are a member of that minority group. So this is a privacy concern, and in a sense this can be a security concern. And going through the context of this before I get to what is ultimately like four lines of text. + +BAN: Um, another awkwardness is well this is something that only makes sense in the context of web browsers. So, is it even a concern for ECMAScript and the wording from the Mozilla statement is that you shouldn’t, if I recall correctly, no information should be experienced beyond the information beyond the region string and there is no concept of an user agent string in ECMAScript. Um, so the question arose on this where else do we handle it? It is yeah, so it could be an HTML problem or something like that. So, the unsatisfactory solution is the one sort of just taking the mozilla statement most straightforward saying okay, we actually can’t do this. Like Node.js can do this and browsers cannot do this and it is a privacy risk and security risk. It is something that the the w3c privacy interest group said this is a bad idea. 
With that being said with the comments from steven there is good international localization reasons to be able to do this in the future. Like I said, no browser allows you to do this now. But there is interest in allowing to do this in the future and it can improve localization but with a fairly severe privacy risk. + +BAN: Um, so, proposed solution that sort of evolved out of that long conversation on the public request and underlying issue, was well, okay, what if it is admissible to store new locale and now locale data but not permissible to reveal what what non-standards locales are installed on the system. So this comments from Google, arose out of a conversation with @Manishearth and proposed a issue the server asked if you have a specific locale installed, and it is—Okay, so if we are given server asks you due this low conceptualize or have this locale data installed and you say no, you can install it and that server, that origin knows that you got it installed. The problem is if another server asks hey, do you have this installed? And you say yes actually I got this installed, we are good. And that indicates that have accessed another server that has requested that you install it. Which there by identifies that oh you are someone who likely—you are 134 one who has installed this. So likely you are in a group which is relevant, and then you can identify it as a member of this group and sensitive information that can actually be dangerous. So the proposal is okay, you never actually reveal that you have something installed. Instead, if they say oh we want to use this and if you have not gotten installed on a different server that was not installed in the first place, and they say do you have this installed, you don’t say yes. If you have not, you download Intl. And if you have not we proposed a method to say okay we will pretend that we have downloaded this. And as a note, there was something I was going to lookup but I didn’t. And the reason to do this the ECMAScript is because of the locale proposal and it can meet through javaScript so that is our problem in a way. Okay so, I will flip over to the actual text. This is then if you follow, if you look at the conversation this has been very extensively workshopped. + +BAN: So this is the change with all of that contextual historical preamble of going back to 2021 and this is the change. I will allow people to read that for a second. I will not read it out loud. I will go back to the slides now and one wording that came up, okay, so this is a specially more web browser implementation, and so maybe it should be applied to something like—this is to web browser implementation. Or sort of like abnormality tract wording and it makes no difference in practice right now. It should be restricted to systems where fingerprinting is concerned such as web browser and that is one small wording question. + +CDA: There is a point of order from Dan asking to bring that up. + +BAN: This is sort of like a sketch of a prototype API and so everything related to exactly how browsers will reframe from exposing a bit given a low conceptualize installed, and all of that normative and all of that is reflective okay whatever the best method of doing this is. And so it is a normative thing. The normative thing is to say okay you can’t reveal this. And the method of avoiding revealing this is left to the implementation. + +DE: Are we talking about `navigator.locale` or the default local that is in Intl.? 
+ +BAN: Locale info + +DE: What API in locale info is that exposed in? + +BAN: That is a good question because I have not looked closely. + +DE: I don’t think it tells you your locale but just information about a locale. + +BAN: Okay so not pertinent to locale on your system but just information on a locale. + +DE: I guess it depends on what is installed. + +BAN: Exactly. + +DE: Can you bring up the API that lists out the installed locales so we can see it? + +BAN: That is a good question, give me one second for that. + +DE: I think these are about query parts of a locale and the place where this discussion came up for me previously was in a `navigator.locale` feature which will give you a list of locales that are in like together with preferences. So, um, is that what you were thinking of with this? Or? + +BAN: My research was focusing on Intl.Enumeration. + +DE: Oh okay enumeration. In this proposal? + +BEN: That is a fine question. + +NRO: It is its own on proposal. + +SFC: Um, yeah this came up originally with the fluent iteration which this is not of locales but a lot of the other entities that exists such as currencies, and calendar steps and measurable p unit and things like that but we don’t—I think closest that we have to supportive locales is that the supportive locale of on function and that will give a list and force you out of that and in principle and you can have supportedLocaleOf with all possible locale and what support of that locales of that function will call. + +DE: Right, so I am confused by this presentation because I thought all browsers adopted the strategy of having a fix set of locales that are there. That is invariant across what you—so you want it upped. + +BAN: That is current behavior, and what we like it do is that browser can have dynamic locale and locale data so long As it does not reveal servers that has been installed. And something for the future. + +DE: All the API’s async, what is API are you suggesting would allow this? + +BAN: I am not suggesting an API and simply noting if you allow this and if you allow dynamic demonstration of locale data, you do not locate by whatever means. + +DE: Okay thanks for the context. + +BAN: It is literally this text. Saying okay, don’t do it, don’t reveal it, it is already installed with like dedictating any method of how it is not already installed or any method of not revealing that it is not already installed. + +MM: I got a clarified question, and several times you used the phrase is privacy concern and it is a security concern. And I don’t know if what you meant by that is that there is some additional security concern beyond just privacy? + +BAN: Um, part security not being in the sense of compromising but in the security in compromising your personal safety and this is across a channel that can reveal information about you. + +MM: So there is an isolation of communication concern here in addition to the privacy concern or is the concern specifically because the privacy issue? + +BAN: The concern is the privacy issue. + +MM: Okay that is fine and certainly consider privacy to be a security. Just wanted to clarify that. Let me suggest that you take this discussion to TG3 would be interested in and this is the kind of thing that TG3 is here to discuss. And it is only other thing is that as you brought up, if I understand you are suggesting that the language you show for browsers be normative, correct? + +BAN: Yes it has to be normative. 
Or well one—it could be that this part is normative and this part which is some versions of it separate paragraph this could be a node. So definitely normative and this is the clarification. + +MM: Okay I will try to leave a comments on the second part of that. But with regard to the generalization to platforms that concerned about fingerprinting, perhaps that generalization could also be attached to this normative text as a nonnormative note because there to be normative with regard to a not well-defined category is a little bit weird. + +BAN: Cool. And also thank you for that bringing it up to TG3, that would be very useful. + +MLS: So on Mark’s comments and I support the margin error fingerprinting is an issue on one browser and I think that is more forward looking and we will have to go back and change it later but that will make it more muddy as to what systems that is. And I guess that would be up to the implementation to choose to do that but I favor that over the web browser. + +JRL: Okay, so I understand the suggestion, but I don’t understand how it can ever be implemented in practice. If you have a locale that is installed on your system, there must be some observable output of that locale that you can use to detect if it is installed. And if it is not installed in the system, then you could detect that it is not installed because that output has changed. If you are doing silently install in the background and there is a timing attack because it is obviously this locale was not installed. + +BAN: Yes, and let me see if I can find this suggestion. Okay, um, so, with that, essentially, it is—if I remember it correctly, give me a second. It basically involves user promise, so if it has been installed it waits for a second, and then responds as if it were downloading or as if it were installing the locale. + +JRL: But this is extending on network conditions, and you can wait for an arbitrary amount of time and you have no idea of how long the user would take it to install, you can wait for random amount of time, maybe. But I cannot see this working this a situation where we know that submilli second per timer allow you to detect state. And this multisecond long installation process does not allow you to detect state. + +BAN: So, here is the comments that might be useful. So yes, there is definitely like side channel methods of telling whether or not it has been installed. And there is sort of like best practice with fingerprinting and that is not to make it not possible, especially in the internalization context where you are revealing information about yourself. And the best practice is to make it relatively harder. And to make it such that if someone is doing it, or if someone is requesting information that is useful for personal fingerprinting that is detectable, so it's not passive but it is active fingerprinting. So yes there is going to be methods—this closes off the easiest methods to determine whether or not you have the locale installed. It is not necessarily demanding like—how do you put it? Um, the concern is imposed to reduce the fingerprinting risk. So the wording—so enumerable items is not necessarily like there is tricky ways of timing in text to make it enumerable and so it is mitigation and something we don’t want to just give up that information in a JavaScript. 
+ +SFC: I want to emphasize that this solution is not relevant to this back text that we are trying to achieve consensus for today, and it is just illustrative of the type—of a type of solution that might be conform answer with it but the status quo write now that is browser can conform to the first sentence which is about the basically the fingerprint of all enumerable items and this being equivalent attributes of the user agents string but this is us writing it down and this is something we discussed at length the enumeration proposal, and the committee proposal was contingent on us taking efforts to write down this invariant that is currently a variety. That is the final goal of this change is to write down that invariants. But we would like to event move the direction of supporting dynamic locale or something like that and this is where sort of starting from the second sentence after "furthermore,", that section of the text is focused on well, you knee we would like to eventually move in the direction of support global and here is some guidelines of when this happens and the teens of constraints that such a mechanism would need to abide by. So that is all that we are really asking for here. Like I think that you know when actually have a concrete proposal, we will definitely come back to TC39 and discuss the in’s and out’s of that proposal and I want to emphasize the specific thing that is proposed not something that we are asking for consensus on today. + +BAN: The API is not something we are talking about. And I want to note that in one of the earlier versions of this, I might have mentioned this but in the earlier versions that what I highlighted there was considered normative and all of the stuff below, all the explanations for reasons for wanting this, was a note. So normative and potentially normative. + +JRL: I will comment that on the causal solution and you can imagine how you perceive this, and can you do the download and throw the download away if you really wanted to be private. + +BAN: That is not like an ideal situation because it is introduced for specific people who have these locales but that is a possible solution. + +JRL: I can imagine—like pretend there is different companies and a hypothetical food company had a primitive node, they might do like download in that node where you really the server will be as secret as possible or safest as possible. + +BAN: One of the things I think that came up on the discussion is the lock down node, that would be considered a different browser or a different platform. So yeah. + +SYG: This seems to me but I don’t make the privacy decisions for something normaltive in a web platform in Chrome, I would need to bring this particular verbiage back to Chrome privacy. I appreciate the flexibility that you try to build into a lab browser and to have dynamic downloads, and I want to make sure that our privacy team thinks this is kosher before we merge it into 402. + +KML: There was a plus one on that SA inferiority side and one possibility is to allow, for supporting but call out browsers as one that mandates but not permitted to. And like any implementation in a privacy setting including the browsers or something like that. + +BAN: The privacy to conscious wording, that is very nice. Just to respond to earlier point, I am thinking about the discussions we had related and like just download it, and I believe that might be in the context of some like languages and reaction and with a large amount of data. 
+ +KML: So I was saying that as mandatory thing as an option in terms of time thing and who really has the authority to mandate it. + +BAN: And any mandate of a method of locale related data is outside of our scope. + +SYG: Has it been discussed of pro’s and con’s of leaving it completely to the host? I have not read the entire thread, but like HTML spec itself as imbedded JavaScript have some sentences here and there about reducing fingerprinting vectors or avoiding producing new ones why do this at this layer whether that who ever just imbeds JS? + +BAN: It is because we can potentially the ones that are doing that and because there are related to ongoing proposals. There’s stuff that just we are the only ones that can do it. + +SYG: I don’t think I understand, I think you need to go into more detail. + +BAN: So, to support locales of JavaScript could end up specifically JavaScript could end revealing what nonstandard locale and locale and I don’t think that can be handled anywhere but here. + +NRO: I have the same question earlier and I checked and for define implementation defined placed with available locale which is what this request is about. Where at least a data these are not defined anywhere in HTML, there is not hook that HTML defines what they are and so these—this is entirely saying not well-defined within 402 but defining it the best and at least mention that this at least exists. + +KML: I guess I think what he is trying to say and I think instead of put thing text in here and the host effectively injects of the requirements of the host. It is an abstract host, – + +SYG: There would be something of like the host may do more stuff here like forbid certain things from happening, instead of us workshopping the language exactly today or, or workshopping the language at all, I suppose and let the host do something. Like we would insert a patch point, basically that says, hey here is a point where the host might do something special, including trying to not reveal more information. Am I not really advocating for that because I am not doing the work and I am asking if there has been discussion of that approach? Because that will let you side step all of this, right? + +BAN: There is not as far as I know any discussion of that approach. Not entire what the HTML said. + +SYG: If the immediate folks who—the immediate stakeholders who care about is the browsers and pushes on to WC3 and s0 figure out the language there, and for the node to figure out the language et cetera. + +SFC: I was just going to say that I think some of the feedback that you and Keith shared is useful and I don’t think—I mean this is a very long thread and a very long discussion as you can see, but I don’t know if there is particular approaches necessarily released. About the host or something along those lines. Despite the very long conversation on this request. So, in efforts of you know, like come out of this agenda item with like you know a productive next steps I was wondering if there is a way to like if we can established like who is a list of stakeholders we should engaging with who might have concerns about this. And who might have ideas about what are better ways to approach this problem, and you know, ideally related problems. Like, who are the people we should invite to incubate our calls for instance so we can fold here because I hear people say throwing around things and WC3 and these people and those people. 
And you know, it would be very nice if we could actually establish, here are the people we need to talk to. You know, and yeah, so we can actually take action on this. + BAN: Yeah, it is toward the end, and I can bring it to the internationalization group and to the browsers if necessary. + SYG: My read of the situation is that Mozilla brought up a fingerprinting concern, and then you are trying to engineer some language that leaves enough flexibility to do future things but also satisfies their concern. That means the people who want a solution are Mozilla, and if they want interop they want Safari and Chrome to agree, so the immediate stakeholders, who have not been involved, are the browser privacy teams. All you really need is to build something into 402 that they can then debate offline, and then put in whatever they agree on. It seems like you are taking on more responsibility than you need; is that a fair characterization? + BAN: Oh, um, possibly. I mean, I personally enjoy working on fingerprinting-related issues. But yes, um, when you say put something in 402 that they can work with, are you saying add something like this and actually add it to the spec? + SYG: I mean along the previous lines of “here is a point where a particular implementation of 402 and JavaScript can insert additional constraints”. Because if you are asking for consensus on the substantive text like what you have on the screen right now, I think it is clear to me that we don’t have the right stakeholders in this room to—yeah. + BAN: Yeah, I agree that is definitely premature. + SFC: We are just about out of time, and I will add one more comment, which is that I have not heard from Dan Minor or anybody from Mozilla in this discussion, and this is directly in response to the comments that they raised. I did want to ask: one, does this basically solve your problem, but two, do we really need this text? Because the status quo is that there is no problem with fingerprinting; it is only that there could be one in the future. So, do we really need this spec text at all? Because if we can close the issue, that would also be a resolution to this topic that we have been working on for a couple of years now. + BAN: Yeah, I think the two determining things are: one, Mozilla’s feedback, and two, how much pressure there is, or how immediately pressing it is, to actually allow installing locales. + DLM: Let me check with YSV about that; she was in the original discussion. I guess I should end there. Personally, given that no browsers allow dynamic changes to the locales, it is not something we have to worry about right now, and maybe in the future we will see more capabilities like that, so I would like to double-check what her original concern was. The spec text as written right now seems fine to us, from what we discussed, but I think the privacy folks should be involved one last time. + CDA: We are at time. Ben, can you please dictate a summary for the notes and conclude. + BAN: I was going to say, it is great that we thought we could open up this conversation and get this work done in 30 minutes, because it is opening Pandora’s box. Thank you for your feedback and your assistance. 

### Speaker's Summary of Key Points

We need to reach out to relevant stakeholders, including browser privacy teams, both on this language and on the necessity of this language in the first place—
that is, whether this normative text is necessary at all. diff --git a/meetings/2024-07/july-31.md b/meetings/2024-07/july-31.md new file mode 100644 index 00000000..1a930153 --- /dev/null +++ b/meetings/2024-07/july-31.md @@ -0,0 +1,1113 @@ +# 103rd TC39 Meeting | 31st July 2024 + 
**Attendees:**

| Name | Abbreviation | Organization |
|------------------|--------------|-----------------|
| Waldemar Horwat | WH | Invited Expert |
| Dmitry Makhnev | DJM | JetBrains |
| Ben Allen | BAN | Igalia |
| Chris de Almeida | CDA | IBM |
| Jesse Alama | JMN | Igalia |
| Nicolò Ribaudo | NRO | Igalia |
| Eemeli Aro | EAO | Mozilla |
| Michael Saboff | MLS | Apple |
| Philip Chimento | PFC | Igalia |
| Jordan Harband | JHD | HeroDevs |
| Justin Ridgewell | JRL | Google |
| Keith Miller | KM | Apple |
| Istvan Sebestyen | IS | Ecma |
| Samina Husain | SHN | Ecma |
| Mikhail Barash | MBH | Univ. of Bergen |
| Aki Braun | AKI | Ecma |

USA: Yeah, it is time. Hello everyone, and well, good morning, good afternoon, good evening—though it is probably not afternoon anywhere. Um, welcome to the third day of the meeting. Before we start with the topic that you probably already see on your screen, let’s start by asking for help with note-taking. There are 18 of us, which means that I hope people can share this over time, but let’s start by asking for help with notes. So who would like to help out today? Any volunteers from yesterday to start us off? Anyone? I hear it is really nice. And fun, and team work, I don’t know. Please help us. All right, there are 25 of us now. So, we might be able to begin as soon as we can get somebody to help out with the notes. If any of you amazing people would like to help us for this session? Just two hours. Or maybe part of the session. + BAN: I can do the first hour but I will step out after that. + USA: Thank you, Ben, for always doing notes. Although I would say it would be quite nice to see somebody who has never taken notes before help out with that. I think it’s fair for all of us to share that responsibility, and I promise fun. But, um, yeah. Oh, while we try to convince somebody to take notes now—I hear someone? Well, I am sorry for the delay, but let’s just wait a bit until somebody agrees to help out Ben for the first hour. + USA: Thank you. Perfect. Thank you, Chengzhong, you can begin. 

## Propagate active ScriptOrModule with JobCallback Record

Presenter: Chengzhong Wu (CZW)

- [PR](https://github.com/tc39/ecma262/pull/3195)
- [slides](https://docs.google.com/presentation/d/1FQNSpCdzkvcRg-yFBjUOqjrNAVHezfoVLjgK4cfEjIc/edit#slide=id.p)

CZW: Thank you. This is CZW from Bloomberg, and I will present a change to ECMA-262; the pull request number is 3195. Let’s begin. The problem here is that ECMA-262’s host-defined hook has a note recommending that hosts capture the active script or module when HostEnqueuePromiseJob is invoked, that is, when the promise is fulfilled. However, HTML, which implements this recommendation, now defines the capture in HostMakeJobCallback—when the callback is registered—instead of in HostEnqueuePromiseJob. The difference here is quite subtle, but it is observable. The HostEnqueuePromiseJob hook may depend on the internal promise state, which means the timing of HostEnqueuePromiseJob depends on whether or not the promise is already fulfilled. 
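The slide example is not reproduced in the notes; the following is a rough reconstruction of the shape of the example being described. File names and the shared promise are illustrative only.

```js
// state.js
export let resolve;
export const promise = new Promise((r) => { resolve = r; });

// moduleA.js — registers a reaction while the promise is still pending
import { promise } from "./state.js";
promise.then(() => {
  // Which ScriptOrModule is "active" for this job? Under the proposed change
  // it is always module A (where .then() was called), not module B (where the
  // promise happened to be resolved).
  import("./dep.js");
});

// moduleB.js — fulfils the promise later
import { resolve } from "./state.js";
resolve();
```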
+ +CZW: So when `promise.then` is called before the promise is fulfilled, then the HostEnqueuePromiseJob will not be invoked immediately; instead it will be invoked when the promise is actually resolved. So in this case, we can see in the example that in module A, we called `promise.then` before the promise was resolved, and the HostEnqueuePromiseJob is not called immediately; and in module B, the resolve function for the promise is invoked, and now the HostEnqueuePromiseJob is called for site 1. And then for site 2, the promise is already resolved, so HostEnqueuePromiseJob is invoked immediately. So we can see the difference here is that the ActiveScriptOrModule for site 1 and site 2 can be different, because of the timing of when the HostEnqueuePromiseJob is invoked. And the proposed change is making the enqueued promise job’s ActiveScriptOrModule consistently be the one that is active when `promise.then` is called. So, this matches what is defined at the moment in the HTML spec, and for example, the two examples here will behave identically. The left example is a bound iframe location setter and the right is JavaScript user code with equivalent behavior. So in ECMA262 those two code—um, sorry. Yeah, this is another related problem that is not defined by ECMA262 or the HTML spec, but we expect the two examples here to behave identically. And the reality is that we have a matrix of different behaviors between the HTML spec and ECMA262, and Chrome, Firefox, and Safari all behave differently in all of these cases. + +CZW: So ECMA262 and Safari behave identically: both reveal the promise state. And which means Safari is compliant with ECMA262. But Chrome and Firefox don’t respect either the HTML spec or ECMA262, and they behave differently for the bound function and an arrow function. And with the HTML spec, it doesn’t reveal the internal promise state, so the two examples will behave identically; the HTML spec doesn’t have these problems. So the proposed change would be that we want the bound function and the arrow function to behave identically, to not reveal the internal slot [[PromiseState]], and that is all of the proposed change. And we can go to the queue for consensus on the change. + +USA: Yeah, so first on the queue we have JRL? + +JRL: Yeah, perfect can you hear me? + +JRL: Yeah a little mic icon was not changing based on my voice, perfect. So on the slide, what is the spot where this reveals the internal promise state? + +CZW: At first it looks like a zalgo issue, because it depends on the timing of when the callback will be invoked, but actually it will be like this example: with a pure promise I guess the promise state will not be observable—all the promise handlers are invoked in the microtask queue, but if the host captures the ActiveScriptOrModule, it will be observable through the ActiveScriptOrModule. + +JRL: So you are able to tell whether the promise was resolved or pending based on the behavior when the function gets called? + +CZW: Yeah. + +JRL: Okay. + +USA: Next on the queue we have KKL? + +KKL: Yeah, I wanted to ask for clarification. What are all of the observable effects of ActiveScriptOrModule, from the language point of view? Do they include the referrer specifier for dynamic import in functions constructed with the Function constructor, or is that separate? And I ask this because this touches upon an aspect of dynamic scoping in the language that maybe there are better solutions to.
So yeah, can I ask for clarification on that point? + +CZW: Um, the short answer is this will be in the next discussion, and in this change we want to focus on the HTML spec, because in the HTML spec there is functionality that depends indirectly on the ActiveScriptOrModule, because HTML things like location depend on the script that has the execution context. So I would like to defer the dynamic import discussion to the next eval discussion, and we will focus on that when we get to that topic. + +KKL: Are the solutions orthogonal? + +CZW: Yeah. I can defer to NRO? + +NRO: So assuming that the next eval discussion does not have consensus, then this change has observable behavior for dynamic import. Specifically you can do— you can have a promise that resolves with a string that performs dynamic import, and on that promise you can do `.then(eval)`. And then if the promise is already resolved, the dynamic import specifier would be resolved relative to the module where the promise is then’d, and if the promise is not yet resolved, the dynamic import will be resolved relative to the file that resolves the promise: that is where the referrer script or module comes from. + +CZW: So it depends on whether the promise is resolved or not. + +NRO: And if the next discussion on eval does get consensus, then this proposed change does not have any effect that is observable within ECMA-262, it is only observable with HTML features and HTML— + +CZW: I think even if that discussion is not concluded, the change here will make it a more consistent state, because it is captured when `promise.then` is invoked rather than when the resolve is invoked. + +NRO: Kris, does that answer your question? + +KKL: I am still pretty muddled. My first impression is that it is difficult to consider consensus on this issue without first understanding the implications of the issue next on the agenda. And my intuition is that the behavior of eval with the ActiveScriptOrModule for an indirect eval should depend on the instance of the eval function and the realm it was created in, and not depend upon any dynamic scope in general. + +USA: We already did the reply by NRO so that is it on the queue. + +NRO: I wonder if maybe it makes sense to come back to this after the other discussion, and then after that discussion we can come back to this and see if we can reach consensus here. + +KKL: Yes that sounds good. + +USA: Um, next we have a topic by MM? + +MM: Yes, we should come back to this after covering the eval topic because this cannot be disentangled. + +USA: And next we have KKL on the queue, seconding a consensus call after the next topic. And yup, that is that. + +NRO: So we will have time to come back to this. + +USA: Well, how much of this presentation is left? Or do you want to just move the rest? + +MM: Certainly a decision on consensus needs to wait until we cover the next topic. + +USA: Yeah, well I mean for instance, this particular topic in total has 15 minutes left, so if we just defer the rest to later we can—yeah we have 15 minutes at least. Do you think that would be sufficient for that discussion CZW? + +CZW: Yeah I think so, and we can have further discussion once we have had the eval discussion. + +USA: Okay great, so let’s end this here and we can come back to it after the eval topic. + +CZW: Thank you. + +### Conclusion + +This topic is revisited later in the day. + +## Decimal for Stage 2 + +Presenter: Jesse Alama (JMN) + +- [proposal](https://github.com/tc39/proposal-decimal) +- [slides](https://notes.igalia.com/p/proposal-decimal-tc39-july-2024) + +JMN: Great.
Um, yeah, this is Jesse Alama, coming from Igalia, working on this with Bloomberg, and looking forward to decimal in the future. Just as a quick reminder, and I know many of you have seen this a number of times and I have presented it a bunch of times before, but surely there is someone new here: decimal is about adding exact decimal numbers to JS. And the purpose of doing that is to eliminate, or at a minimum significantly reduce, the rounding errors that are encountered when binary floats meet human-oriented numeric data. That is fuzzy, but the main example is money, and another example is things like items on a graph tick label, or coins with numeric labels, and this kind of thing. + +JMN: Here is a simple example. Just to give you a sense of the space that we are operating in: calculating a bill. This is something where, depending on the complexity of the case and the numbers involved, we may see rounding errors if we do it in a straightforward way with binary floats in JS. And here we have the new Decimal128 class that is being proposed, and we have a calculateBill function here that just iterates over a simple list of items with count and price, you can see that below as an example. And some kind of tax that gets applied at the end, and we do our calculation and we round to two decimal places, possibly specifying how we do the rounding, and get some kind of result there. And the idea is that these are not binary floats, but decimal numbers. So you might think of this intuitively as calculator-type semantics, and this computes with decimal numbers like we know and love from elementary school. + +JMN: The main reason for this presentation is to give you a diff of what has been going on since last time. And I am happy to report that WH and I have been working intensely, and WH is a co-author of this proposal. Working with WH, the spec text has been considerably fleshed out; if you want to see some of the details you can look at some of those issues, and we can also look at the spec text here. We have a definition of the space of decimal numbers that we are working with here. And we propose some rounding here, and we have a definition of what counts as a decimal value, and the space of possibilities here is limited: there is a limit to the significant digits that can be used and a limit to the exponents that can be used. And we have some additional definitions here, and you see that there is a NaN here and you can see 0 and negative 0. And I can make those bigger. And we have some errors here and some simple arithmetic like addition and subtraction and so on. There have been some discussions about the API of this thing. And not too much to say on that front, although a couple of items are worth mentioning. We have our toString, and we are trying to get that to be like `Number.toString`, which switches from decimal notation to exponential notation if the number is too small or there are too many decimal digits involved. And we have decided on adding a rich array of comparisons here. And the difficulty that we anticipate is that many developers will have some trouble with the existence of NaN here, and whether a plain lessThan would actually be correct. And in previous presentations we had a single cmp or compare method, and we decided to open that up and have the comparisons developers are more likely to expect. And there is a notion of a quantum of a decimal, and the representation of a decimal represents, you might say, the digits after the decimal point involved, something like the scale or precision.
And I am going in circles trying to explain this, but I hope you can see what I am getting at. And we are working with JSON here, and for the moment we decided to throw when decimals are encountered; this is open for discussion, and we are making this parallel to BigInt, which throws when it shows up in JSON.stringify. And we have been working a bit on the Intl side of things: some bugs and some incompleteness in NumberFormat and PluralRules were detected, and those have been fixed, but the discussion remains open, and there are some notes on the PRs if you would like to look at the details of that. + +JMN: But the main purpose of this presentation is to get at a point that was made explicit last time in Helsinki, about whether decimals are kind of an exotic concept that is generally not used out there. And the question is, is there any new uptake for decimals that we see out there in the JS ecosystem? And this might be a reasonable claim, you can say decimal is a fringe concept that might have had uptake a while ago but has low uptake today, and there is not much momentum so-to-speak, or very little demand for these kinds of things. And I will go through a bit of an argument for why this is not quite right. What we can see is that decimals do exist. They are out there. And issues with them keep coming up. Of course, decimals as they exist today take the form of libraries, and so what we see mainly is difficulties with decimals as a library that we hope would be smoothed over if decimals were to exist in the language. + +JMN: So just some basic usage stats. In earlier presentations we have discussed that there are only a handful of decimal libraries out there, by one person, with, say, a small community of people assisting a little bit here and there. There is decimal.js and big.js and bignumber.js, and decimal.js-light. And there is a fair amount of use on NPM, which reflects that these are still being used. Obviously, a lot of these numbers will be indirect usages, and developers are not knowingly pulling these things down, but they show that they do show up in some way, and so they are sitting in a fairly thick middle, you might say, of the JS ecosystem, at the bottom and underlying a lot of tools and libraries. + +JMN: There are some issues with the current decimal libraries out there. As you might imagine, the slight details of how decimals are implemented in those libraries that I just talked about in the previous slide can lead to some compatibility issues for users and library developers. So, I just want to highlight some of those issues that can happen and some of the friction that exists between these different libraries. + +JMN: One difficulty is that decimal conversion can be hard. Here is a nice issue that I found here, about working with JS numbers and decimals. The difficulty is that there is Mongoose, a database library that is using a decimal type internally, and there is a JS interface to this. But usually developers will work with numbers at the end of the day. So they need support for both, and so there can be some issues there. And the developer is complaining about an indirect or unexpected cast to Number. And this is bad because the whole point of decimal numbers is that we are supposed to have a kind of rich data structure that expresses all of the decimal digits, and if you convert that to numbers, most of the point of using decimals is gone, so we need to make sure that is in our control.
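A minimal sketch of the kind of unexpected cast being described, showing why routing an exact decimal value through Number silently loses digits (the value is illustrative):

```js
// A value a database decimal column can store exactly, received as a string
const fromDatabase = "12345678901234567.89";

// An accidental cast to Number rounds it to the nearest binary float
const asNumber = Number(fromDatabase);

console.log(asNumber);                          // 12345678901234568 (the cents are gone)
console.log(String(asNumber) === fromDatabase); // false
```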
+ +JMN: There could be multidecimal set-ups, and by the way notice the dates on some of these issues are still fairly recent. Here is one developer complaining about too many BigNumber libraries being used. And that the text goes on, and there is a link if you want to see it for further discussion. But the idea is that there are these handful of options out there, and BigNumber, and decimal.js and so on, but in some applications many of them can get combined and so we have multiple operations of decimal in a single app or library. Um, and in some parts of the JS world there is an increasing appreciation for reducing dependencies and some part that is not that much of an issue but some communities there is demand for this. And reducing package size and dependency count can matter and some developers are using decimal indirectly, but when they look for libraries, they can show up. And some of these libraries are not tiny. And so the decimal.js download is 283K unpacked. One idea that showed up in one place which was the idea of dropping decimals. And the idea is to plug in your own library. And this is a bit of a radical suggestion but you can appreciate the developer pain this is coming from. And the developer pain is something like you know, we are basically just using decimals as strings, and they are present and don’t throw them away but this is something thin here and this is something that a library is probably not going to provide. And going in the single line of thinking is something about barely used decimals. And so you see a number of issues on GitHub like remove decimal, or minimize decimals, or a lightweight version of decimals, and here a developer is complaining that the bundle is too big and you do a tight check and do a couple of methods where JS will create a lot of support there and a lot of mathematical functions and so on. So that one is going far beyond what we propose for our decimal proposal which is just basic decimal arithmetic. + +JMN: So just to recap here, we have had this discussion before, that having decimals today as a primitive is too much of a heavy ask for the JS engine implementers, and that is an understandable position. But nonetheless we have seen that there is a documented need for decimals out there. Many developers need decimal numbers for things like money or measurements, and they do have alternatives out there in the form of libraries but there could be some friction out there. And they add weight to these things. And I would say that having some kind of well-specified version of decimal, which is something that a library cannot really add unless the author were to take considerable trouble to say, verify this or ask someone for review. And the point of having a well-specified lightweight decimal library would create a coordination point. And I would assume many developers would drop decimal stuff there, and basically use our stuff out the box, and if they need to go beyond, we expect that would be appreciated a lot by the community. And nonetheless, and I want to say we emphasize again that we are keeping the door open to a world where decimals do exist as primitives, and we are doing what we can there. And there might be some kind of funny edges there of that world but that might be a nice world and turn to include them in the future. Um, and I think there is an interesting task for awareness of decimal in numbers and if this does exist we can increase this for decimals and the need for them and how they exist out of the box. 
+ +JMN: So that is it, and I did not want to make that too much of a heavyweight argument; I hope the argument is straightforward and relatively small. To recap where we are with the decimal proposal: we have seen that there are a number of use cases out there, and documented a number of these, both from our own developer experience, as well as a survey that we have presented here in plenary once before. There are a bunch of ways to represent decimals, but we have chosen one that is fast and not too complex. And we have an API that meets a wide array of use cases, and the spec text is there, ready for inspection and ready to take a look at, and there is a polyfill available, and we have some Intl integration with PluralRules and NumberFormat, and thank you by the way SFC for all of your help in getting those details ready. + +JMN: A couple of next steps, and there are a couple of missing things. For instance, adding toLocaleString, discussed recently, and there is an issue about BigInt and JSON.stringify, and there is an AO in the spec that needs to be fixed, thanks for picking that out SFC, and there is recent feedback from WH, and thank you for that, and that is basically done. And there is the issue tracker, and admittedly that thing is a bit—we need effort there; Temporal we can use as an example of using labels and those GitHub features, which have so far been largely ignored. And then there is also discussion about trailing zeros in toString and JSON, and that is an open discussion, but in our view that is not necessarily a blocker for going forward. And that is basically it, I will look at the queue for discussion. How are we doing on time? + +CDA: We have plenty of time, JHD please go? + +JHD: This one is a clarifying question. You use the term quantum, and I have tried to look at some issues and PRs. I understand its meaning in an English or physics context, but can you tell me what it means in a decimal context? + +JMN: Yeah, I believe that is a bit of curious terminology, and I myself am not a big fan of that term, but it is in the IEEE spec. Quantum for us in this context means essentially the exponent: the power of ten you would need to multiply the number by to make it an integer, negated. Think of something like 1.2: you would need to multiply that by 10 to get 12, and the quantum there would be -1. So if you want to have an extra 0, as in 1.20, the quantum would be -2. + +JHD: That is the number of places to move the decimal point to get to an integer? + +JMN: Yeah. + +JHD: Okay, that is a relatively simple explanation, but I am concerned about the guaranteed confusion from the term. But that does answer my question, thank you. + +JMN: Yes, good question, thank you. + +CDA: WH you had a reply? + +WH: I might be able to provide a better answer. In IEEE binary floating-point numbers there is only one representation of 1,000. So 1,000 has a unique binary representation. In IEEE decimal floating-point there are several representations of the number 1,000. They are all equal but can be distinguished via obscure operations. They differ in the internal exponent vs the internal mantissa (not user-visible except via obscure operations), so you can have 1,000 with a precision of 1, 1,000 with a precision of 2, with a precision of 3, 4, and so on, up to 1,000 with a precision of 34. + +CDA: All right, and thank you WH. I am next on the queue, I am happy to see this increased collaboration with WH, and that he has joined the champions group. + +JMN: It is really great to work with WH and this is in good shape.
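To make the quantum discussion above concrete, here is a toy model (not the proposal's API): a decimal value can be viewed as an integer coefficient times a power of ten, and the quantum is that exponent.

```js
// Toy model only: split a plain decimal string into coefficient * 10 ** quantum.
function toCoefficientAndQuantum(text) {
  const [intPart, fracPart = ""] = text.split(".");
  const quantum = fracPart.length === 0 ? 0 : -fracPart.length;
  return { coefficient: BigInt(intPart + fracPart), quantum };
}

console.log(toCoefficientAndQuantum("1.2"));  // { coefficient: 12n, quantum: -1 }
console.log(toCoefficientAndQuantum("1.20")); // { coefficient: 120n, quantum: -2 } (equal value, different quantum)
console.log(toCoefficientAndQuantum("1000")); // { coefficient: 1000n, quantum: 0 } (IEEE Decimal128 also allows 1n * 10 ** 3, etc.)
```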
+ +CDA: WH is next on the queue. + +WH: One of the things that came up which I would like to get a feel from others of the committee on, is whether *equals*, *lessThan*, *lessThanOrEqual*, etc. should return booleans or tri-state values of true, false, or undefined? The current design returns tri-state values, but I’m not sure how I feel about that. + +JMN: I think the thinking or justification there, it is not a very good strong justification, is that the NaNs would not be comparable. And so undefined is at least a possible value but then of course another approach is just to bail out and return false at any time. + +WH: Yeah, returning tri-state values works for 5 out the 6 but will give you incorrect results for *notEquals*. + +JMN: Yup. And so— + +MM: So, since these things are not the symbols, because they are only named methods and therefore, it is up to us to—it says we just don’t have just a single cmp method, so why not a single cmp method with a four value output, and so less than 0 equals would ambiguously deal with this in a straightforward way. + +WH: We do have a *compare* method that does exactly that. The reason I also want *equals* and *lessThan*, etc. methods is that those are very common operations, and making everybody compare decimals for equality by calling *compare* and comparing the result with a Number is just a recipe for typos and bugs. + +JMN: The presentation might have been misleading. *compare* is there but these things are also there. + +MM: Okay. + +CDA: Mark? + +MM: That is it for me. + +CDA: DE? + +DE: I like the idea from WH of having comparison functions return only true or false, given that we have `compare` for the tristate. We can iterate on this detail during Stage 2 though. + +JMN: Okay sounds good, this is something that I don’t have a strongly held position. + +WH: I was surprised when I saw this — I thought these were just returning booleans. I’d prefer for these to just return booleans. + +CDA: SYG? + +SYG: I have a clarifying question about the earlier slides and the first issue that you screenshot with something with the word mongoose in it, and not supporting a particular JS library. And how would having a built-in decimal help this kind of issue, where a library is not supporting decimal correctly? + +JMN: I think this might be able to be a solution by simply passing the data around instead of converting somewhere in the middle of a pipeline somewhere. + +SYG: It is converting because it is aware that thing should not be converted, so why would the existence or presence of decimal library do that? + +JMN: The library would have to work with the decimals as far as possible and not cast them or convert them. + +SYG: Right but that does not seem to be an argument for or against built in decimals. That is is a bug with this library. + +JMN: Um yeah, you may correct with that. But then the question might be then, what other kinds of bugs like that are out there with this kind of libraries, where some conversion is happening unintentionally where something is no longer a decimal, Even though originally it was. + +CDA: We have somebody in the queue. 
+ +NRO: When we added new built-in types to the language, and so libraries that work with engineers and this is all adopted to supporting the types, and so like libraries would start supporting this type and they can choose decimal JS but when if someone is different in the library and having one in the language and I don’t understand each other and libraries can just using a type without support of this because it does not work with the those decimals. + +DE: I am in the queue next. I agree with NRO that this is about solving underlying problem, which is they want to deal with decimals and this decimal.js is an implementation detail of the solution. So I think there would be less cases of people saying “okay I will convert this to a number as the intermediate between two things” which is completely broken if we have this decimal that is a language that everyone can speak. + +SYG: To be honest, I understand the thought process, I find it fairly weak in that it depends on a bunch of actors all seeing a built-in decimal coming into existence and doing the right thing to do in each use case. + +DE: Coordinating many actors is the point of standards. So I understand the pessimism, but that is where would get value. + +CDA: All right, we are almost at the time allotted for this topic. We do have some time available this afternoon, so I think we can continue to discuss. Next is JHD. + +JHD: You show the download display and count and you made a comment that they are all the same author? Like it is the same person who has made all four of these, these different flavors of ways to make decimals? + +JMN: Yes. + +JHD: That is fine but it does make coordinating with multiple actors simpler but with decimal in the language or without. And my question is why can’t this one person just add a simple protocol or a string protocol even to all four of these libraries so they can trivially losslessly—or ideally losslessly converted between each other and all of those. Then you can pass any of the four objects to any of the four library and capital J and capital W can work and this can be done unrelated to this proposal, and probably they we would wants to do this any ways unless this proposal advance and all four of would make a change and all API’s—this guy can make a 5th library and have all these be thin wrappers around that object. There is a lot of current solutions that would address the paper problems and the bugs you described. In a world where there is like 50 different authors creating libraries, that is much harder and in userland. And DE is right and that is primary point of standards is to appoint actors but that is not only mechanism to do that and given this is one actor and coordination can happen inside their own brain, why are standards necessary here to do that when this person has not even seen the value of doing that themselves? + +JMN: Um, I wonder if to what extent they follow the author of these things is following that, whether there is any pressure for that. And it is possible that there’s enough in the community that don’t know the possibilities of what they are saying and maybe he has tried it. I don’t know if it comes to that. And the author here—he has probably thought of things like that and I actually how far he can go with it because the underlying details of these things I would say are different enough that simply switching out one for another and expecting a robust API to work out, I would have to think about that. My gut reaction is that might be a bit implausible. 
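For illustration, the kind of string-based hand-off JHD is suggesting might look like the following. This is a hypothetical usage sketch, not an existing protocol, relying only on both libraries accepting and producing exact decimal strings:

```js
import Big from "big.js";
import Decimal from "decimal.js";

// An exact decimal string is the lowest-common-denominator interchange value.
const fromBig = new Big("123456789.123456789");
const asString = fromBig.toString();     // exact digits, no binary rounding
const asDecimal = new Decimal(asString); // hand-off to the other library

console.log(asDecimal.toString());       // "123456789.123456789"
```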
+ +JHD: You are right, your explanation is plausible that it might be implausible, and very might be and but in this case how does standard decimal help other than killing these four libraries and having everyone to switch all four of these to the standard one? + +DE: I am on the queue, I think the idea is yeah that this switch those libraries and switch to the standard one and I think the difference between these when JMN was looking at this, we would not find any kind of correlation of what the application needed and what library they chose, and the differences in customizability was not useful for peel that you might have thought. And the idea is standardize on one thing and I I don’t know how we would use a protocol and what that would accomplish. Because we do not want to bundle in multiple decimal libraries was that you are talking about a runtime thing that would happen later. And then the same point as JMN made earlier that they have different semantics and different aPI’s so they are not replaced within each other. + +JHD: Then how is the built in decimal a replacement for these libraries and API’s. + +DE: They would do a transition and you need to invest in transition in standards and you know they are long lived and well maintained this is just a super common thing at TC39 and some things we add things to the standard library because they are useful broadly. It is just so similar to temporal. + +JHD: So with decimal in the standard, all users of these four—well, the bulk of them would migrate to the standard and drop the library entirely. + +DE: I am not claiming that directly but that is way software goes this and there is a lot of software uses these and I don’t think most of them would use the standard to update their code. + +JHD: And what was the rationale of the author to create four different approaches? Like how does this proposal address all of the differing needs or constraints or whatever that drove the creation of these four different APIs that are incompatible in terms of semantics. + +JMN: I cannot speak for the author, and so the different semantics and I do not wanted to put words in his mouth. + +CDA: I posted an answer in the delegates chat and describes big number JS and decimal JS, and does that include decimal JS—hopefully that will answer that. https://github.com/MikeMcl/big.js/issues/45#issuecomment-104211175 + +JHD: Is Mike involved in this proposal, given that he is the preeminent expert on decimals? + +JMN: We have had chats with him but not formally involved in the proposal. + +CDA: NRO? + +NRO: So I am saying based on the numbers of how big they are and how many things they support. Like there is no JS light but if you don’t have time to comments but there is different subset and given that you would have a small risk library for the int that are different and subsupporting that. + +CDA: MM? + +MM: So the point that you made about the door open and can you go to that slide? and seems the opposite. If it was conceivable with any reasonable probability that we would eventually have decimal primitives, I would strongly object to doing anything else. And as I understand the reason this is on the table is because the browser implementers have definitely vetoed the decimal primitive. And say we accepted this and for whatever reason the primitive is no longer off the table and these have been answered and I think first this takes the winds out of doing anything else with decimal as it should. 
But if we additionally later had decimal primitive in the language, we would curse the day that we accepted this one into the language because then they would coexist and would coexist uneasily. And one example we are allowing these things to be denormal, the primitive representation of a thousand and never allow that for primitive and another example is that primitive landscape and there is a number of primitives in the wrapper and the with wrapper of the primitive objects and neither answer is good. And I think we should only consider this if A, we think primitives really are permanently off the table, and B we accept going forward with this one precludes a world where we ever admit decimals as primitives even if we are otherwise able to. + +DE: Can I jump in and answer that? I am on the queue. So, from mechanical sense, the idea of keeping the door open is something we have discussed before on committee. Decimal objects could be reconceived as the primitive wrappers for a future primitive. First, we do believe that this is 100% off the table and we have heard that from implementers. And from that discussion and explanations and we have asked if they change their mind, could we still do it? I think the answer is yes from that kind of primitive wrapper perspective. Then there is a semantic point, if it is a primitive do we need to distinguish different quantums? If it is a primitive should it just be the cohort set? And I don’t know. That is kind of a big question. But ultimately, because we are deciding that we do want to include this in the data model, to include quantum in the data model, and that is a blocker to be a primitive and that will remain an argument forever that we should not have this as a primitive. And I think this is all consistent and we have heard strong argument in committee why it is important to keep the quantum in the data model, and those apply even if browsers change their mind about whether they would admit new primitives. + +MM: I suppose we can do this offline, if it’s conceivable that the browser makers could eventually change their mind on this. + +DE: I think it’s not particularly conceivable but I mean, of course it’s conceivable. Everything is conceivable. We’re going based on the evidence that we have. + +MM: Okay. I will just say my overall take is I’m not going to block this but I am reluctant. I would prefer not to see this added to the language. But I’m not going to block it. + +DE: So, sorry, can you clarify what you thought of my point that we had reasons for why we wanted to include quantum in the data model and if those are valid, and if that blocks it from being a primitive and this explained as a wrapper, then that would apply either way. Do you see the logical chain I’m making? + +MM: I think there’s a step that I’m missing in there. If it’s a primitive, I would only accept it as a primitive if it were canonical—normalized, rather. If we thought we wanted to leave the door open, then I would object to the non-normalized representations here and insist that everything be normalized here as well. And if the door open is completely not an issue, then I would still actually prefer, as you know, normalization, but I won’t insist on it if we believe this is just object state forever. + +DE: So I think the proposal is just object state. And I think the assumption with this kind of slide has always been in the, you know, in this counter-reality, would this work? 
And we have the reasons for including the quantum for not just—for not canonicalize and not normalize. And so, yeah, I think in the interpretation that you’re taking it, I would characterize it as the door is closed. + +MM: Okay. So I’m not blocking. I will yield. + +WH: A bigger concern with making this into a primitive later is object identity where identical primitives are distinguishable. The current Decimal arithmetic etc. methods, always produce new objects. + +DE: But the idea is these would be primitive wrappers, not that the object would be represented as the primitives and there would be a separate way to get at the decimal primitives and if you get two objects you get one of these as identity. + +WH: Okay. + +DE: Does that address your thought? + +WH: I don’t want to digress on this. Let’s go through the queue. + +SYG: I have a question about one of the motivations that I have heard many times is about kind of interchange with other parts of a complex software system that already supports decimals. Database usually seems to be the one that is usually brought up. Given the relative lack of built-in decimal types in other languages, I’m trying to understand why that motivation would be helped by our choosing Decimal128 with whatever set of things that we chose. Like, why would it solve coordination for interchange? + +JMN: I wonder if the premise is true. I mean, many languages do have decimals out there. I think especially just focusing on the database example– + +SYG: I didn’t say no languages had decimal. I said many languages do not have decimals. That is true. + +JMN: That’s quite right. I mean, thinking about databases in particular, we know that interacting with SQL databases there is going to be in many cases heavy decimal usage certainly. + +SYG: Sorry, but SQL—maybe WH has an answer that would explain because he’s on the queue as a reply. + +WH: If we’re going to do decimals in interchange, IEEE decimal is the thing to use, and that’s what everything that’s not older than the IEEE 754 spec seems to be converging on. + +SYG: But like SQL would or does use something that is basically compatible with decimal 128 today? I’m asking. I don’t know. + +WH: SQL is far older than the IEEE standard. + +SYG: Okay. So decimal128 in JS, how does it help bridge like SQL drivers for database that are widely used? + +WH: Decimal128 can represent pretty much anything you can do in SQL. + +SYG: It subsumes the format and the older more ad hoc formats, let’s say? + +WH: Yeah. + +SYG: Okay, I see. Thank you. + +DE: So SQL itself doesn’t include the limit to the length of the decimal, different databases do have different lengths. But some actually do have slightly longer than 34 but when we have gotten in touch like the Oracle database authors who had this issue, they somehow in practice had shorter limits. This was considered not a problem, that 34 digits was enough. So this was sort of a thing that we spent some time looking into on this decimal128 versus BigDecimal question. Separately, I don’t think that the lack of a feature being in another language is this really strong piece of evidence in general for us in TC39. I mean, we could apply it to a lot of things. We’re kind of about identifying what would be useful. If we restricted ourselves to the intersection of what is in all languages, that would be quite limiting. + +SYG: Right. 
I think you’re taking a more general version of my argument and I tried to be more careful in phrasing it to be about not over languages of being sufficient counter argument against decimal but that coordination of complex systems that are complex software systems that are written in different languages was one of the motivations for this. And primarily I’m thinking of C++ that does not yet have a builtin decimal type, given that many pieces of software in deploy systems are written in C++ given that the C++ software have to choose a userland library to do its decimal computation, does our choice have that problem as the primary motivation? And WH’s answer is good enough for me now in that Decimal128 seems to subsume the other choices and if that is the case then decimal128 would solve that. + +DE: Okay. If it’s about subsuming possible choices I want to know about C++ in particular, it has been under consideration for a long time in WG21 to add decimal and C++ to decimal somehow. If anyone wants to work on that, it’s probably a relatively good starter project for someone in WG21 because most of the questions are straightforward and people aren’t working on it. Anyway, in Bloomberg we have the standard library called BDE that is open source, you can just download it. And it includes a decimal. It’s important for us internally to standardize on decimal usage. It might be that some of the kinds of code that are open source and in C++ don’t end up using decimal. So that might be one source of the coordination thing. But anyway, this need is identified across languages, including in the C++ community. + +SYG: What is your take away that the C++ body has not yet added a decimal after being in consideration for many years? + +DE: It’s kind of like how we haven’t adopted AsyncContext despite it being useful, it just takes time and takes people being dedicated to going through the process. It just happens that language features are missing and they get added over time. And, you know, the particular person has certain other priorities and it’s hard to transfer this, you know, kind of normal project stuff. + +KKL: Provided that we get through the existential issues with decimal and into the bikeshed, I wanted to point out that decimal will establish a precedent in the language standard for the naming of comparison operators or method names for comparison operators. And I have a strong preference for spelling out all of compared over shortening to cmp. And based off the fact that JavaScript is already a PERL(?) dialect we could borrow the short names from pearl along with regular expressions in the deck methods. + +JMN: Just to chip in. I agree. The cmp thing is actually more like a hint for the reader. The real name as it stands today in the spec is `compare`. + +KKL: Thank you. + +JMN: Everything else is spelled out even if it’s a long thing. + +EAO: Looking at the string formatting methods that are proposed for decimal, the readme mentions that toExponential, toFixed, and toPrecision are similar to the corresponding Number methods. But at least in the current spec text, in particular with respect to the arguments that are passed to these methods, the Number methods currently each take a single number integer as its argument. But the spec text for the proposal has each of these apparently accepting an object argument, and requiring very strictly that it must contain a specific property there. 
And then also, for example, toExponential doesn’t seem to even support the number-of-fraction-digits input that `Number.toExponential` has. When considering what is being proposed here for these methods, should one be looking at the statement in the readme that says they should look like what is there for Number, or the text that’s in the spec, which is saying something quite different and which probably ought to get fixed at least in some ways? + +JMN: Yeah, it sounds like you found a bug with toExponential, but the intention is that they are similar: decimals are like Numbers and have the methods that Numbers have available, giving you results in different formats. The idea was to have some kind of options bag there as the argument, but I think this is something that we could hash out in later stages. I think I myself waffled back and forth as to whether we have enumerated parameters or an options bag. TBD. Either way is fine with me I would say. Any suggestions are welcome. + +EAO: What I would suggest at the very least is, what works for Number should also work for Decimal. And it might also accept other options instead of numbers as the argument. But at least what works for Number should work for Decimal, otherwise it would be very surprising. + +JMN: That’s a great suggestion. Thank you. + +WH: I hope this is a bug because the spec is currently incoherent, where the signatures of some of the methods were changed to take an options bag but the algorithms still take numeric parameters. So I fully agree with EAO that this should take a Number as the first parameter. I think the API should be the same as what it is for *Decimal128.round* where the first parameter is a Number and the second parameter would be an optional rounding mode. + +JMN: Sounds good. + +WH: I will note that the presentation slides will not work if these take an options bag. + +JMN: That reflects my own going back and forth on this topic. + +DE: So on the queue, thanks for pointing out bugs. I’m happy to see that we’re getting to smaller and smaller bugs; I think each of your rounds of review, WH, has been really helpful, and it seems like we all are thinking in similar directions here. So I think this kind of fixing and iteration can happen in Stage 2, before Stage 2.7. + +JMN: Anything else? + +CDA: There is JHD. + +JHD: I was going to go after you. But, yeah, I mean, so the objections I stated in the last plenary have not been resolved. There were no conversations between then and now. I reached out to JMN when I saw this on the agenda and we chatted. I’ve been mildly reassured that the specifics of the API aren’t as important, meaning we don’t actually need the exact proposal’s API to be vetted in userland first. And some of the comments today as well about the four different libraries with different APIs, it seems like it largely doesn’t matter which one it is to most of the consumers. They just kind of grab one and it works. So that suggests that they would grab this one and it would work. But it still doesn’t feel to me like it carries its weight without primitives, given how important a numbering system is. I have all of MM’s concerns: if it’s going to close the door on doing it as primitives, we absolutely should not put it in the language. It’s not clear to me if the browsers will ever change their minds about adding new primitives, I don’t think that—if that were to happen, it would be an immeasurable tragedy to just basically be unable to have a numbering system be a primitive in the future.
And we just heard some reasons why without the canonicalization it wouldn’t be possible to have primitives without constraints. That sounds like a cross concern. I don’t think it’s appropriate to advance this to Stage 2 yet. That remains the case. + +DE: So you said it doesn’t carry its weight. But the presentation gave a lot of reasons for why it would be useful. So can you go into some more detail on what you’re looking for? + +JHD: Yeah, I mean, obviously this is a subjective concern that can never be completely quantified. But I have a little bit of experience in languages where decimals are a class and not a primitive, like Java and Ruby, and it is very awkward and unergonomic to remember to do that. Bugs abound when people just think you can type a number and it will do the right thing and you have to employ lots of tooling to tell people to do the uglier ickier thing in order to get the right math. It seems very important to me in JavaScript that it be very easy to convince people to use the better thing. So, for example, with BigInts if there’s a use case where integers matter and there are big integers that matter, it’s not difficult to convince people to slap the N on the end of the numbers. That’s a very low friction change to get like correct math with integers. And that is what I envision JavaScript having at some point in the future, that you slap an N on it or whatever it is and all of a sudden the numbers are intuitive and match what you learned in school instead of the floating point nonsense that many languages have and we all know and love. I know there’s reasons for that, I’m not saying we shouldn’t have floating point. But the majority of the complaints about numbers are due to the floating point stuff. So I want to ensure that we are still heading towards a world potentially where that’s the case, where intuitive math is easy to code in JavaScript. And it’s like obviously you can use linting to force people to use decimals whether they’re primitives or not. There’s tools. But it’s a harder ask when there’s a lot more boilerplate involved. And then, you know, things like syntax highlighting and stuff, right, MAP(?) and regexp and date are all syntax highlighted the same because they’re just objects, it’s primitives and keywords and stuff that get distinct highlighting. There’s just a lot of aspects and facets that I will not exhaustively enumerate that are impacted by it not being a primitive and something as core essential, I don’t know what the right synonym is, as a numbering system really needs to be a primitive. And the arguments that I heard to try to override that are effectively coordination point which, you know, I’m not super convinced about, considering there’s one author in the ecosystem solving this problem and that is the coordination point. And the other one is, you know, there was one today about the incentive of changing your code for a standard built-in thing. While that’s totally true, tons of people have migrated away from moment even with Temporal on the horizon and they have to change again when Temporal ships. I also seen it doesn’t have to be in the standard to incentivize things to change especially when the majority of some thing’s dependencies are transitive. In other words, a lot of people select eSLint / Babel / React / whatever, but it sounds like for these decimal libraries, most people are selecting something else that in turn selected one of the decimal libraries. 
In that case, the bar for incentive to change your code is much, much, much lower. So that’s what I expect here whether decimals in the language or not. Yeah, like I said, this is subjective. I don’t know if I will be able to quantify it and give you a rubric where it can be like check the right number of boxes and I’m satisfied. I’m trying to in good faith explain some of my thinking on it. + +PFC: So when it became clear that this proposal was not going to include a primitive, I found that pretty disappointing as well. But I think JMN and DE have explained and shown how the current state of decimal objects doesn’t close the door on decimal primitives any more than that door is already closed by other factors not having to do with the decimal In other words, I don’t think it makes the situation worse. And, I think if we want to move the needle on those other concerns that are preventing engines from adding new primitives, I think having a decimal object that then sees uptake is probably the most likely way forward that’s ever going to budge those concerns. So I kind of think, if the goal is to have decimal as a primitive eventually, it’s counterproductive to not entertain the thought of an object first, because I think that’s the only way we’ll get there. + +CDA: We’re right about out of time. And I do want to read—I was waiting to read the statement from SFC until and unless JMN was calling for Stage 2. But I will read it now. SFC says I am happy with the champion's support for retaining trailing zeroes in the data model, which solves the long-standing i18n bug involving inconsistency between `Intl.PluralRules` and Intl.NumberFormat. I look forward to continuing conversations on exactly where we draw the line between full precision and normalization in the various operations. I explicitly support this advancing to Stage 2 from an Intl point of view.” Okay, with that, we are out of time. JMN, how would you like to proceed? We do technically have 15 minutes that would be available after lunch. But we need to move on to the next topic, now or in the next minute or so. + +JMN: I understand it. It sounds like there was a block. So I don’t want to challenge that block. Did I get that right? Is that correct? + +JHD: That is correct. + +DE: Can we continue discussing this in the 15 minutes after lunch? I think that would be useful. I don’t understand some of the points of the block. And that’s important for us to get the reason for the block on the record. + +CDA: So I think that’s fair. I’ll note that I don’t know that JHD’s position has changed since last time. His objection back in June. + +DE: Sure. But we didn’t really hear the reason in much detail then either. I would really like to continue the discussion after lunch if we have those 15 minutes. + +JHD: I also remain available between plenaries for the champions to reach out and hear more of those reasons, which hadn’t happened yet. + +JMN: That’s my mistake. Sorry about that. I was focused mainly on working with WH and SFC to get some of the other spec details correct. + +CDA: Okay. Would you like to dictate a summary at this point for the notes or prefer to wait until continuation? + +JMN: I think I prefer to wait until continuation. I think we might get some more material there. + +CDA: Okay. I have captured the queue. 
+ +## Avoid capturing lexical context in indirect eval + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/tc39/ecma262/pull/3374) +- [slides](https://docs.google.com/presentation/d/1Xko1Md81wXpUFvgH_nQVl0-DW9hyqlOTkZX3y7pImvg/edit) + +CDA: NRO, are you there and ready for capturing— + +NRO: Yes, screen share. + +NRO: You can see my screen? Good. So this discussion about avoiding capturing lexical context in indirect eval it is about normative request in the repository. Before I actually get into the topic, let me make it clear that we’re only talking about indirect eval. We have these two types in the language. One is the direct one that can capture local scope, local variables, that’s only when you explicitly use with the eval() syntax right there. If you do anything else and if you use optional call or assign eval variable or eval through any expression that’s not just directly put eval there, then we have indirect eval. That doesn’t capture any lexical scope, it just a normal function. You can implement this in JavaScript by loading a parser, parsing the code and interpreting the AST in the context of the global scope. These examples on screen here are all examples of indirect eval. Also new Function is one of them and all the new function constructors. So if indirect eval is a normal function, how do functions behave? You can have a call from other realm and that is the same as the function and we can import from other file and still works the same. (inaudible) it works the same and call in the same function with the same parameter. The behavior of the function is not affected by this when it comes to normal functions. This is only true for things defined in Ecma-262, if we look at stack traces we can see the call function is traced, or we have things like `location.href` and object are affected by their caller to know what URL to use to resolve the URL you’re setting it to. So moving the call somewhere else might have—this is defined in HTML and not in 262. + +NRO: And HTML has some wiring through eval and new Function in indirect eval to make sure that the special cases keep working. So, for example, HTML here if we have some iframe and we set `location.href` for the iframe but the other document where this is appearing. And this is also for example for tracking promise rejections and in this case because of this promise rejection in the second line will be the frame itself but the document in this code. This has to work through indirect eval. There is wiring there to keep track of who is the cause of this. And this was originally was just in HTML but since 2015 we have the wiring written down in 262 since technically it was not possible for HTML to get the context with promises. So since then, indirect eval or new Function will capture the script to run to pass it forward. That was in 2015. After a while, we added import to the language where dynamic import is the same text itself function. And it will resolve the specific that is important to the module of the script and it is starting from module A and starting from module B. This is like module A, module B are called from the referrer of the import. So when import was active and wait to check in which module does this dynamic import appear? We had machine for that back? It was get in the current module and what we introduced for HTML. 
However, this pierces through indirect eval, which means even though indirect eval, as a normal function, should not capture any scope, it is capturing something for the dynamic import part. In this example here, we have like some folders and some other file. If we have a dynamic import of a file (inaudible) it resolves relative to the current file. If we have indirect eval, in this case I’m doing indirect eval by assigning eval to a variable f and then calling it, the first call would resolve relative to the current file, because that’s where the call is happening. But then this is the case where eval stops being just a normal function. So instead of calling f directly we use (inaudible) from some other module and pass the function and the arguments to it. The actual call is happening inside that other folder, and the dynamic import resolves relative to that. And to be clear, for direct eval this would be expected, because direct eval should capture the scope where it appears. Indirect eval was meant to be a normal function. It is a normal function, except for the very specific case of having import pass through it. So the change proposed by the pull request is that these two calls should have the same behavior. Indirect eval is a normal function, and two different ways of calling the same function should result in the same outcome. And the way to get there is that dynamic import calls inside indirect eval or new Function would have no active script. This is not new. This already happens in some cases, for example, when HTML is running JavaScript code not from inside a script: if you do a dynamic import inside an HTML inline event handler, it will fall back to a default referrer because there is no active script or module. We have a constraint here, which is that we cannot break HTML. There are programs out there that rely on eval retaining this behavior too, like treating eval’d code the same, which means that we can only change it for the specific dynamic import case and not for what HTML is doing. And is this all? Is this really the only case? There are multiple places where GetActiveScriptOrModule is called to get what the current module is. One of them is import.meta. That’s not a problem because it is a syntax error in eval/Function. And the other is HostEnqueuePromiseJob, or HostMakeJobCallback, capturing the module, and there eval and Function have to keep forwarding this for HTML. So the change here would just be to stop using GetActiveScriptOrModule for dynamic import, and instead do something else that doesn’t pierce through eval. So I would like to ask if there is consensus for the change. I didn’t include the slide on web reality. But the web reality here is that, as far as I could tell, I have a repository I can share, Chrome and Firefox implement the ECMA 262 behavior and Safari does something different. I believe Safari falls back to the realm in some cases but not in all the cases, and I could not figure out what it falls back to in some other cases. But it is not the active script or module. So there is like no consensus between implementations on what is happening here. The hope is this change is easily web compatible, also because passing eval around and doing dynamic import through it is probably not that common. And so I will go through the queue now. I couldn’t figure out how to share. + +USA: There’s nothing on the queue at the moment either way. Let’s give it a bit maybe. First, we have SYG on the queue. + +SYG: I’m just kind of confused. This is probably just some clarifying questions first. What is the motivation for doing this? Did you run into bugs with the surprising behavior with different relative module paths?
+

NRO: The motivation here is dynamic scoping—it’s not a bug that I ran into myself. Dynamic scoping can cause confusion because refactoring and moving code around is safe except for the things that are lexically captured, like variables, and refactoring breaking things is a real-world hazard. The only dynamic scoping we say we have is in direct eval, but then we found this case in which we have dynamic scoping in something else.

SYG: So I think maybe it’s a matter of perspective. The thing that you’re asking for consensus on is the one slide where you pass indirect eval to a call utility.

NRO: This or earlier?

SYG: That one is good. So the current behavior is that the foo.js relative path would resolve to different absolute paths because the call comes from a different iframe?

NRO: This is just all within the same iframe. The code just happens to come from different modules.

SYG: The code happens from different—

NRO: So what is happening here—let me go back to the other example. This example here, the last two lines in this example (slide 8). So this is all happening within the same realm, within the iframe or document. But the two calls to f are happening in different modules. Specifically, the one on the second-to-last line is happening in this module on the screen, and the code in the last line is actually happening in the other foo.js.

SYG: And in the indirect eval example—

NRO: This is all indirect eval.

SYG: What are the two different paths that get resolved in the indirect eval example where you pass eval to the call utility?

NRO: So in these last two lines, the resolved paths are the two shown in the comment.

SYG: Not this example. The proposed change slide where you do let f equal eval and f of import.

NRO: This example. The one at the top?

SYG: My Webex is not updating the slides. It’s still showing your slide “is this all?”. I don’t know if it’s me.

CDA: It’s on your end, SYG.

SYG: I’m on slide 9, that’s the intended slide?

NRO: Yes, slide 9. Did you see slide 7 one second ago?

SYG: Yes, I saw slide 7.

NRO: Here in this example (slide 9), this is actually the same code I had there. I’ve just extracted the only two lines where behavior would change. So right now, in this example at the top of the screen, without the proposed change, this example alone is not enough to tell what will happen, because from this example we don’t know where the function is defined. But today the second dynamic import is resolved relative to wherever the call function is defined.

SYG: Okay. Today this will be relative to wherever the call function—the call utility function—is defined?

NRO: Yes.

SYG: Why is that not dynamic scoping?

NRO: That is dynamic scoping. And the proposal is to get rid of it.

SYG: But indirect eval is like dynamic scoping all the way down. Why do we want to get rid of it for indirect eval?

NRO: Because indirect eval does not have dynamic scoping at all except for this case. It’s direct eval that does the dynamic scoping things.

SYG: Indirect eval is dynamic scoping in the sense that it’s at the global scope and you can have whatever on the global scope. You don’t know what global scope it’s running in. It’s the same as direct eval except there’s no scopes in between it and the global scope?

NRO: Like, each function has a pointer to whatever realm it’s running in. When we use indirect eval, it is reading that pointer—it’s not reading it from where the call is scoped.
If you grab eval from a different iframe and you call that eval in your own iframe, it will still use the global from its original iframe, and not the iframe where you’re calling it.

SYG: That clears it up for me. Thank you. In this case, I think I support this change, but remind me again what needs to change in implementations.

NRO: In implementations, there will probably be some bit on the execution context stack saying this stack frame has been introduced by eval, so ignore it—stop here when trying to look for the active script or module.

SYG: Okay. I am unfamiliar with that part of the code. So I don’t want to promise ease of implementability or anything. I would like to hear if other browser vendors have thought about this.

USA: All right. We have a clarifying question by MM.

MM: I just want to clarify, first of all, that I agree with NRO. Indirect eval is completely lexical. Direct eval should not be thought of as dynamic scoping but should be thought of as a special form, the same way that if-then-else is a special form: it evaluates in the scope of the code it appears in, and that is a static thing. It’s not something being called. That’s it.

USA: Moving on with the queue, we have support from MM, and next—

KKL: And this change anticipates the Module constructor and behaviors in that context. My hope is that an eval can be confined to a particular realm, and with this particular leak of dynamic scope, it allows code that is being run with the eval from a particular realm to sense information—specifically the active script or module record’s referrer specifier, or the base URL in HTML—from outside of its sandbox. So this is important for a native implementation of confinement in the same realm.

USA: All right, thank you. Next we have KM.

KM: I guess SYG asked for feedback. I’m saying I don’t know. I haven’t looked at any of this in a while. I’m not sure. I guess my proposal would be, if we have consensus to do it, then do something like what we do for staged proposals and wait for two implementations to do it before merging it into the spec.

USA: Next we have DLM.

DLM: I’m responding to SYG’s question as well. I’m not sure if this is difficult to implement or not, and unsure whether we could see web compat problems with this. I don’t think we would want to be the first to implement this. I guess I would be more comfortable if I heard one of the other browsers say they were interested in running this as like a little experiment or something, to see if we actually encounter problems.

NRO: Regarding web compat, there is already different behavior between browsers, and hopefully that can work out. Given that browsers expressed implementability concerns, maybe I would change my consensus call to what KM suggested, which was: let’s consider this as a Stage 3 proposal—I guess a Stage 2.7 proposal—and then come back next time. And then, as soon as it is implemented and we know it’s actually implementable, I will come back.

SYG: To clarify, 2.7 in this case would probably mean not just Test262 but also WPT?

NRO: Probably. I would have to check whether these can be tested or not.

SYG: My primary concern—and this is part of why I asked about the motivation earlier—is that I agree with what DLM was saying. This seems corner-casey to me, which means it’s likely to be deprioritized. After all these years we don’t have interop on the incumbent settings object, that really weird thing, and there’s a lot of corner cases there.
As far as I can tell the demand for the interop for that part to have interop is not high and likely to remain interoperable and it feels like that and I don’t want to give false promises if we give consensus that will get done sooner than later. + +NRO: Yeah, I don’t have an expectation here. I will answer your original question with the WPT tests given how it is the edge—very much depends on what the embedder is doing. + +USA: Then that was it for the queue Nicolo. Would you like to ask for— + +NRO: I guess now I’m asking for consensus for making this into a Stage 2.7 proposal. + +USA: Let’s give it a minute. On the queue, we can see support by DLM for Stage 2.7 proposal. Also support by—you have it on your screen right there. By MM and SYG. And perhaps let’s wait a bit to make sure nobody is typing out a really long objection. That’s it, then. Congratulations. + +NRO: Thanks. Just to clarify what this means for the discussion with the third, it means that that other discussion—like, that other normative change is not actually observable from just within 262. + +USA: Okay. + +NRO: That’s it. + +USA: Also could you dictate a summary of key points and a conclusion. + +### Speaker's Summary of Key Points + +NRO: Yes. So summary: We have gone through a corner case for indirect eval containing dynamic import in which it captures the module resolution referrer from the module that is calling the indirect eval. We discussed ant how that is dynamic scoping that is like a refactoring hazard and how we can change that and instead use the same resolution referrer regardless of where the call is happening. There have been some concerns about implementation complexity and about how these are not a high priority for browsers given that we already have divergent behavior and there has not been any ask for converging. So this change has been converted from the normative pull request to the Stage 2.7 proposal. + +### Conclusion + +NRO: And the conclusion is that we got consensus for having it as a Stage 2.7 proposal. I just have a question. Do I have to create a repository or is it fine to just keep the request and nothing else? With its now proposal? + +USA: That’s a great question. I guess that— + +NRO: Let me rephrase. Would anybody be opposed to me just keeping this as a request? + +MF: I would prefer a proposal repo for this. If it’s important enough to be made into a proposal, we should have a dedicated place for organizing discussion on it. We expect more than a linear pull request discussion. + +NRO: I see SYG under the queue. I will make a proper repository. + +SYG: When I was asked to do something that was normative PR, I made a proposal but didn’t, for example, set up a new draft spec thing and just linked to the already made PR as the canonical proposed spec change. + +NRO: Thank you. I will do the same. + +USA: Thank you for everyone. To everyone for the discussion. There’s nothing we can still squeeze in. So let’s take two more minutes for lunch and see you all in an hour and two minutes. 
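As a rough sketch of the corner case discussed in this topic—the file layout and specifiers are hypothetical, and the comments contrast the current Ecma-262 behavior with the proposed change:

```ts
// Hypothetical module /app/main.js
const indirectEval = eval; // indirect eval: no lexical capture

// Direct eval: resolves relative to /app/main.js, as expected.
eval("import('./foo.js')"); // -> /app/foo.js

// Indirect eval today: the dynamic import inside it still resolves
// relative to whichever module is calling it, so from /app/main.js
// this is /app/foo.js...
indirectEval("import('./foo.js')"); // -> /app/foo.js

// ...but if another module, say /lib/util.js, calls the same function
// with the same string, the import resolves to /lib/foo.js instead.
// Under the proposed change, dynamic import inside indirect eval (and
// new Function) would no longer see an active script or module, so
// both calls would resolve against the same host-defined fallback
// referrer regardless of the caller.
```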
+

## Continuation: Propagate active ScriptOrModule with JobCallback Record

Presenter: Chengzhong Wu (CZW)

- [proposal](https://github.com/tc39/ecma262/pull/3195)
- [slides](https://docs.google.com/presentation/d/1FQNSpCdzkvcRg-yFBjUOqjrNAVHezfoVLjgK4cfEjIc/edit#slide=id.p)

CZW: Since we already got consensus on the eval proposal, this change would not be observable within ECMA-262, since that one no longer captures the module. So this is purely about the requirement that we put on hosts, and HTML already didn’t match this requirement, since it captures the active script or module in HostMakeJobCallback. That’s the gist of the proposed change, and you can see that browsers are not aligned in behavior in reality. I would like to ask for consensus to change the host requirement to match what the HTML spec says: to not capture it when enqueuing the promise job, but to move this to HostMakeJobCallback instead. We can go to the queue.

SYG: Clarifying: row 1, the one that says HTML, is the proposed behavior of this normative change?

CZW: Yeah, exactly.

CDA: Nothing else on the queue.

DLM: I’m just wondering if we should consider this as a Stage 2.7 proposal like we did for the eval topic? Just in that—I’m speaking for myself—I’m not entirely sure about the implementation of this one. I don’t know. If other implementers are more confident about this, I’m okay with this as a normative change. If there is some uncertainty, I’m wondering if it is okay to take the same route we took with the eval one.

SYG: Yeah, I agree with that for the same reasons basically. I just don’t know this corner well enough to really give a confident—on paper, the semantics seem more reasonable. But all things considered, yeah, for the same reasons, and practically speaking, likely the only way that I or my team will remember to implement this fix is once we get some new test failures. So having tests as part of the pipeline here will be good.

CZW: Yeah, shall I change that and ask for consensus for Stage 2.7?

MM: I have a clarifying question with regard to the discussion that we just had. You say the same approach we took with eval—are we talking about the semantics or are we talking about the process?

SYG: The procedural one, where we convert this into a proposal, grant consensus for 2.7, do not merge it, and then wait for Test262 and WPT tests.

MM: Okay. And the thing we’re thinking of going to 2.7 with is the first row on this table?

CZW: Yeah.

MM: Okay. So I don’t feel oriented enough to have a strong opinion. But 2.7 seems okay enough to me in the sense that we can revisit this if there’s a semantic problem as well as if there’s an implementation problem.

SYG: Well, MM, I don’t think we should—we should not give 2.7 if we think there may be open semantics issues. 2.7 means we have discussed and agreed on the design, but it is not yet ready for implementation until tests are in; once tests are in, it’s a pro forma kind of thing to then advance to 3.

MM: Okay. I would like to hold back on 2.7. I’ve discussed this also internally at Agoric and none of us feel we really have a good grasp of the implications of this. I would like to postpone 2.7, probably until the next meeting, when we can understand this better. It certainly conceptually interacts with the semantics of eval that we just resolved, so we also want to think about it in that context. And I’m sorry that I did not approach the meeting with a good enough understanding to make a decision.
+ +NRO: Yes, regarding type of tests, like I don’t know if there are tests in this but this is already behavior described HTML so adding tests in WPT is probably related—we should do it regardless of this proposal given those are tests sample that are already specified. When it comes to 262, there’s just not possible to test in Test262. + +SYG: If it’s not possible to—sorry to interrupt. + +CZW: This is a host requirement so it’s purely tests. + +NRO: So this is host requirement so the main host is already violating. + +CZW: Yeah. + +NRO: To answer SYG, it’s technically possible to test in 262 but we have to require like the test’s hardness or to expose some method to check some function to check what the active script for module is because we cannot just rely on those provided by HTML because HTML does not respect our requirements in a way. So that’s what I mean by not possible to test in test262. + +CDA: I think JRL had a similar comment. + +JRL: NRO just answered it. It’s not possible to test in 262. This is like a W3C test or ECG test and not something that we would do. + +DE: So seems like we all agree on the kind of testing that is needed. Those seem like good things to ask for before Stage 3 for the new 2.7 split. I wanted to suggest splitting this into separate proposal we don’t go through the extra editorial work of making a new repo and kind of formatting it differently. What if we just mark the PR as stages? So sometimes a PR is ready to go immediately, but I think at this point if we took PRs through the stage process and just when we get consensus on the PR, we’re getting consensus on like it being at stage 2.7 or it is at Stage 4. That could be useful. Of course, we can always jump ahead and just put something at Stage 4 and merge it when appropriate. + +DLM: The burden of having to make new repo is not actually that high that we should worry about making the change to process here? + +CDA: Yeah, so I know that’s more addressed to DE. But given that we have five minutes here, I’m not sure we’re going to necessarily come to an— + +DLM: I don’t need an answer to that editorial. + +CDA: Sure. And, yeah, not my intent to, you know, prevent the discussion on it, just we’re short on time. + +KKL: Apologies for the question I would probably answer myself with sufficient research, but does this have potential intersection semantics with AsyncContext for us to prepare our reasoning? + +CZW: This change is not—it has some overlapping with host hooks for AsyncContext. AsyncContext doesn’t depend on this change and the change put in by this PR can allow—well, the two proposals can rely on the same host hooks but they are not necessarily entangled. + +KKL: I take that—I interpret that to mean they are potentially entangled under certain host conditions. + +JRL: They modify the same host hook. The proposed changes here match exactly what AsyncContext current spec text would do. It would behave the same. Whatever is currently happening with chrome and Safari right now with these semantics is different than what asyncContext would do. + +KKL: Okay. So AsyncContext would behave consistent with HTML as proposed in row 1? + +JRL: Yes. + +KKL: All right, thank you. + +SYG: Clarifying if we can’t test this in Test262, row 2 means the behavior if you could observe get current script or context, is that what that is showing? + +CZW: Row 2 means if the host implements ECMA 262 requirements correctly, it should be observed as row 2. + +SYG: Oh, I see. 
If the normative requirements on the host hook was actually followed by some hypothetical host, that’s what it was showing? + +CZW: Yeah, exactly. + +SYG: Gotcha, thanks. + +CDA: That’s it for the queue. You have two minutes left. + +CZW: So seems like—still asking for consensus for 2.7. + +MM: I’m not prepared to agree to 2.7 for this meeting. + +CZW: Then can I ask for Stage 2? + +MM: Yes. + +CZW: Thank you. + +CDA: Nicolo can you be brief? + +NRO: We will be sure to bring this to TG3 to discuss— + +MM: I couldn’t quite hear that. + +CDA: Nicolo said they will bring this up at TG3. + +MM: Excellent, thank you. + +CDA: That’s great. The ask is for committee to approve this for Stage 2 and you had support from dLM for Stage 2, from MM for Stage 2. Anyone else with explicit support? I think from SYG, if i recall correctly. + +SYG: Support is a strong word. + +CDA: Fair enough. On that note, are there any objections for this for Stage 2? Not seeing anything new in the queue. Not hearing any voices. All right. So we’ll say it is Stage 2. CZW, can you dictate a summary and key points for the notes. + +### Speaker's Summary of Key Points + +CZW: Yeah, the change has consensus to Stage 2 with the first row as defined by current HTML specification defined and we will bring this topic to TG3 for further discussion. And tests will be added to WPT before we advance to Stage 3. + +## Array.isTemplateObject for Stage 2.7 + +Presenter: Daniel Ehrenberg (DE) and Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-array-is-template-object) +- [slides](https://docs.google.com/presentation/d/1PtAFnHj7OxGMVekvChntoOJ6RzAly9iTGjUThrHQD9o/edit#slide=id.p) + +CDA: Next up, DE and JHD. + +DE: This is something that I’m working with JHD on as part of I think our shared interest in brand checking operations. So this feature marks a template object as a special kind of object. Just to review, when you have a tagged template like this, with the backtick, but then you have a T before, that’s a function that gets the funny frozen object passed as a parameter. But sometimes you might want to know if the thing that was passed was really one of those template objects. This can be useful for trusted types. So if you want to have a way to programmatically create certain kinds of templates or literal code or HTML but be able to verify it was literally in the JavaScript code, that can be useful for certain possible policies that are used to prevent code injection because if you also make sure that all the code you run comes from the server, then this string was passed down. So this makes it possible to have certain trusted type policies that might be a little bit easier to deploy. A little bit more composable. + +DE: Previously we discussed whether this should have a realm-independent or realm-dependent brand check. The spec text is using an internal slot. MM has raised the possibility that we do this, instead it will return false if you get a cross realm template. So we considered this kind of proxy piercing. The template map version was kind of hard to understand the spec even if it might be possible to deploy it. So the current status of trusted types is it's going ahead without literals which I think is a little unfortunate because it won’t initially be as easy to deploy. But we can help them by adding this capability back. + +DE: So the question is, should we care about distinguishing realms? I think for security—for implementations cross realm brand checks are slightly simpler to implement. 
Basically we would store a reference to the realm inside of the template tag so you can do this check, because the template map is not a real thing in implementations. As far as security, it depends a bit on the security model. So with the trusted-types kind of security model, this is not really a factor: when you have different windows and you pass things through postMessage, literalness won’t be propagated. There’s no risk of, like, a cross-origin window injecting some literal thing into you. For ShadowRealms, with callable boundaries, that will prevent you from passing anything that shows as literal. When you have a compartment-based membrane, you will reuse realms, so the realm check won’t actually be that useful. If you do have a compartment system that does use a realm per compartment, it might be a little convenient to have this check be more strict, if your security model depended on something not coming from the other realm. But first, that logic can be inserted into the membrane, and second, among these other four use cases it doesn’t really come up. I’m not sure if someone is especially promoting that deployment model right now. Out of simplicity, I would say we weakly prefer to stick with cross-realm, and we as a champion group prefer that. That’s the current proposal for consensus.

DE: For naming, `Array.isTemplateObject` is a slightly confusing name for people. It doesn’t really resonate with people. Even though it only returns true on arrays—because template objects are arrays—it does not ring for people, because it is not an array operation. We want to call it Reflect.isTemplateObject. The other thing, which is maybe more serious: making something on Array, if you’re going to build kind of a compartment SES environment, is kind of annoying, because then you have to fork it for each individual thing. So that’s kind of assuming the environment wants to patch the world to make it a realm-specific check. So our options are to keep Array.isTemplateObject, to rename to `Reflect.isTemplateObject`, or maybe it could be a global function. So a global function might be easier to fork, but Reflect is nicely out of the way. It is kind of an advanced feature; I think it makes sense to think of it as reflection. That’s why we’re proposing Reflect.isTemplateObject. So, Stage 2.7 and, you know, the naming outcome—I hope we can draw a conclusion today. If not, maybe it could be 2.7 conditional on the later naming outcome. But is there a queue?

MM: So, just since I raised the awkwardness issue with regard to Array.isTemplateObject, I just want to point out that `Reflect.isTemplateObject` has exactly the same problem. You have a different per-compartment object that differs only in this member. The only options that solve the compartment awkwardness issue are things that are per compartment; `globalThis.isTemplateObject` would be ideal. Another option that I don’t think anybody particularly likes, to make a global just to show the principle: `eval.isTemplateObject` is fine because there’s an eval function per compartment as well.

DE: I’m aware that Reflect requires forking the Reflect object. But as we had discussed previously, it’s much easier to fork a namespace object than the Array constructor, which is pointed to by the undeniable intrinsic Array prototype.

MM: That’s true. Okay. I accept that.

DE: That was the rationale for why this makes it less bad.

MM: I agree. It’s less.

DE: I think Reflect makes sense in a semantic way.
That’s why I prefer putting it on Reflect than putting it on eval. + +MM: I prefer the global. + +DE: The global, okay. + +MM: globalThis.isTemplateObject. + +DE: Is that a strong preference? + +MM: Compared to reflect? Certainly have a strong dispreference for array. But it sounds like we’re agreed on that. + +DE: Yeah. + +MM: I don’t know that it’s a strong preference. + +DE: The reason that I disprefer a global more than I disprefer the eval dot thing is because I think that just kind of points people at it too much, it makes it look too much in people’s faces when they’re searching for APIs and reflect kind of illustrates this is a reflection API that you don’t usually use. + +MM: I’m not able to form an opinion about Reflect in real time. I need to chill on it. + +DE: Okay. Is anybody else on the queue? + +CDA: Yeah, SYG. + +SYG: This is more of a question directed at MM, I’ll first try to recap what I remember from when we discussed `Error.isError` and I’m trying to wrap my head around is there a design principle for when a brand check ought to be in your opinion realmful or realm independent? And if I recall correctly, what you said yesterday for `Error.isError` is in practical membrane implementations, errors are copied across the membrane. Is that why— + +MM: No. Thank you for the question. This is a good thing to take a moment to clarify. The objection here, I mean, all of the issues around brand checking and all that would be relevant here if there wasn’t something bigger that’s overriding relevance here, which is whether this is a brand checking issue at all. The purpose of this is to make a trust decision. The trust decision as I think Matthew is going to be expanding on is really about the provenance of the template. Did this come from code that is code reviewed with the code doing the check? And if the template comes from outside that scope, then you want the test to fail because if the test succeeds, then it’s misleading the purpose of your security check. So that’s the overriding issue here. If that was not an issue, then all of the normal issues with regard to practical membrane transparency, et cetera, would still need to be examined. + +SYG: Okay. That helps a lot. Thank you very much. I have some questions about that. But I would prefer to hear the rest of the queue first. + +DLM: Thank you. So we discussed this internally and also reached out to some security folks as well. So we’re not fully comfortable with this at least in terms of its presented use case. It feels complicated and kind of easy to mess up in terms of getting trusted types set up properly and I believe having to freeze the object and then also get this test going. So this sort of complexity is a bit concerning to us and I think this might be error prone for developers. The other side of that then it makes it feel like a bit of a niche use case for the difficulty to get right and not used by the set of people. I guess moving to my question, I’m just wondering is this actually useful outside of trusted types or is this solely for trusted types and sort of motivation behind that question is if this is a niche use case we’re basically going to be adding a slot to every array instance as far as I can tell to be able to track this so everyone is going to be paying a cost to support something that relatively few people end up using. + +DE: When you say a slot, what do you mean? 
+ +DLM: I’m assuming the going example of the JHD’s error work where we’re adding something to test the error instance itself whether or not it is actually an error. In this case I assume any array would then to have some memory usage attached to it to make sure it came from a template object or not. + +DE: So, yeah, I want to talk about this implementation part, but then talk about the earlier parts also. + +DLM: Sure. + +DE: Because this implementation strategy is different from what I expected. For Error.isError, I thought you already knew whether something was an error based on however you’re representing its stack or whatever there. For this, I thought it could be kind of a different map, a different hidden class. It would be bad if every array took an extra word. I agree. + +DLM: Yeah, fair enough. I will leave it there. So that is true. We didn’t fully look into implementation strategies. We just had concerns about implementation costs in general. I would also like to hear your answer to the first part of the question. Is this just trusted types or do you have use cases in mind? + +DE: So your question had a few different parts to it. + +DLM: Sure. + +DE: So just to close out the implementation part, if we conclude this takes an extra slot per array in practice, that would be fatal in my opinion and we shouldn’t go ahead with the proposal. Now, I want to address what you were saying about difficulty using the API. Part of how this came up in trusted types was trusted types was going to expose a built-in policy option where it has a template tag and it can be used automatically to wrap HTML or script. So that addresses the ease of use. Then we have a design question about whether we want to expose this as a primitive in JavaScript or whether we just want to set the internal slot only for use in trusted types itself? I think setting—I would kind of be okay with either. But it feels more compositional to expose the primitives. That’s kind of the direction we’ve been going in with JavaScript and the web platform in general. Even if certain APIs are difficult to use and provide an important capability we provide that and accompany that by a wrapper that does make it easy to use. Do we have use cases that aren’t related to trusted types? So I think knowing whether something is literal is kind of related to CSP. I can’t really think of when you care about this that isn’t linked to trying to understand what’s in the original JavaScript source code. That’s kind of what the feature is for. There are many possible things in that space besides trusted types. Generally, for example, if you wanted to build a SQL library, so you want to prevent SQL injection and there’s an issue with string concatenation, you could make a SQL and verify that somebody called the library to instruct it as a template tag rather than faking it. That would only kind of make sense if you had a user of your API that you really don’t trust that you think is going to hack around you. I’m not sure how realistic that is. But that is an example when you might want it. That’s just off the top of my head. + +DLM: That’s fine. Can I just—I think you answered this, but I just—I blanked for a moment. What was your response to the concern about this being complicated to write in terms of making sure that you have trusted types set up and another step in there that I’m blanking on, I’m sorry, and then using this new API? + +DE: So I thought Mozilla was in favor of trusted types. 
It’s true that trusted types being hard to set up is a concern with it that one might have. If you think trusted types is a good thing to do, then I think this fits in with that. I don’t know all the details of Mozilla’s position. + +DLM: No. Neither do I. I’m doing my best to express it as good as I can. But obviously I don’t know very much about trusted types and something that I need to educate myself on. I don’t want to take too much time on this. Thank you for your answers. + +DE: So we don’t have to go to Stage 2.7 today. How would you like to follow up here? + +DLM: So I think based upon my internal discussion, I would like to get a more thorough review from security people and then follow up. I think I can get that done in time for Tokyo. But I don’t think I’m comfortable with 2.7 today. + +DE: Okay. That sounds great. Previously I thought Mozilla had been in favor of—I could be wrong about this — of trusted types containing from HTML or whatever built-in tag that it would have had. The reason that it was removed wasn’t any of those concerns, it was because there was an attempted implementation of the per-realm brand check and it didn’t work out. Not that that is not implementable but that that particular one didn’t work. It was kind of too high overhead. + +DLM: This is really outside of my area of expertise. What I can say is that, yes, we are in favor of trusted types but we have some concerns about this particular JavaScript API and I will do my best to get those resolved before the next plenary. + +DE: Okay, great. I’m happy to be in touch if that’s useful. + +DLM: Sure, thank you. + +CDA: All right. Noting we have less than ten minutes left. SYG, did you want to— + +SYG: Please skip me. + +CDA: Mathieu. + +MAH: Yeah, I would like clarification on the use case. I mean, I got a little bit more detail here. But from what I understand, this is a case where we expect authors of trusted types to use a type of the object, a template tag, and make a decision based on the type to assume its provenance. Is that correct? + +DE: Yeah. + +MAH: That really seems contrary to what trusted types is trying to accomplish, which is that if you have a capability of executing something, you get to execute. Here we are back to—oh, I recognize that this seems like a recognition problem. And I don’t understand how— + +DE: So the constructor, the particular template tag is the thing that conveys the capability to make a literal string that’s treated as code that can execute. It still is kind of capability-based because in addition to checking that it’s one of these literals, then it will create some other object that has the brand that’s conferred by the tag. + +MAH: So if you need to have the tag itself, why can’t the tag itself convey— + +DE: Because we don’t have the primitive in the language to check literalness. So as I was saying to DLM, we don’t actually need to have this primitive in the language. We can just say template tags have this extra internal slot tagging them as this. And that would be enough. Previously, the trusted types effort failed here because they tried to follow our advice about this check being realm specific which they didn’t manage to implement in an acceptable way. + +MAH: I still don’t understand that. I think what you’re saying is that the tag itself would be checking for the type. + +DE: The tag checks for literalness and returns something that has a type. Later when you try to eval something, that will check that it has the type. 
So we’re not going for Stage 2.7 this meeting. We discussed this at a previous TG3 meeting and I will bring it back for another TG3 meeting.

MAH: Sorry, I might not have been at that meeting.

DE: Yeah, I don’t think we went over all the context. I think we focused on the realm check and the acceptability of that, and that was maybe more focused on the membrane security model than the web security model.

MAH: Thanks.

CDA: All right. KM is up with a slight preference for Reflect as well. LEO?

LEO: Hi. Yeah, I think JHD was just answering my question here. I was looking for any precedent for reflection—like, if it goes on Reflect, should it be captured by proxies as well?

DE: It should not be captured by proxies.

JHD: I replied in Matrix that in a previous plenary, in the getIntrinsic discussions, we got consensus from the group that Reflect would no longer be limited to proxy traps. Nothing has yet landed that isn’t one, but there is no longer that barrier.

LEO: Yeah, if that’s the case, my pet peeve is no longer sustainable and there’s nothing I can complain about here. I had a slight preference for the global name for this case, but I don’t think it’s a blocker. Thank you for capturing that in the notes.

EAO: Thinking about this from the sort of attacking point of view, as I understand it, we want to be sure that whatever we think is a tagged template—that the array of strings—is indeed coming from a tagged template. This means we are kind of preventing an attack where somebody is providing an array of strings that they generated otherwise and would like to have treated as whatever we are trying to trust. So what I’m wondering is: does this check really do anything? Because if I’m an attacker and I can provide an array of strings, wouldn’t it be likely that I would also be able to provide an array coming from another tagged template literal that I’m providing, and that this would kind of pass through this check without any issue?

DE: So imagine that you’re a software developer on a platform that has, you know, eval blocked and you’re trying to ship some code. And you figure out something that you could do if only you had access to eval. So you could then construct some code that does this thing of making this array and freezing this object just to get your thing out the door, and you say “I will fix up the security thing later”—that’s the kind of thing that this would defend against. I see where you’re coming from that this is maybe a little too advanced, and there was actually an “isTemplateObject polyfill”, in quotes, that some people at Google had made that does just check: is this a frozen array that looks like a template tag? JHD took that out of the repo. I think we’re both offended by the inaccuracy of that polyfill, or at least speaking for myself. But in practice, it might work. It doesn’t work against that threat model, but it works to get the code out the door and not worry about the security policy right now. But we’re not defending against an arbitrary attacker, because you’re already running code. But this is like the point of trusted types anyway. I mean, trusted types at the most basic level makes it so you can’t just throw strings around everywhere. So maybe just making it more annoying is enough.
But the thing is that in practice, people make these policies today where they compile their whole program, they put all the strings together into one place and then they have this function that checks is this one of the set of like all the strings? And that is unfortunate. So that policy is sort of completely correct. But it requires this weird compile step. + +CDA: We are at time. I’m noting MAH's comment sounds related to the question of provenance and maybe discuss that in TG3? + +DE: What questions on the queue? + +CDA: One last thing from KG. + +KG: I’ll ask in Matrix. + +DE: If you could both state your questions even if you think it is redundant, I’m curious what it is. + +KG: You just said people are compiling the whole programs and putting all the strings in array. Who is people? + +DE: That’s something that I read on an issue tracker. I don’t know if anybody is actually shipping a site that way. I may be wrong there. + +KG: Okay. + +DE: What was your question Chris? + +MAH: It was mine. I was just clarifying that EAO’s question is related to mine regarding provenance and we should have a question in TG3 for the trusted type use case there. + +CDA: Could you dictate a summary and conclusion for the notes? + +### Speaker's Summary of Key Points + +DE: We discussed this proposal and one thing that the committee agreed on was the name `reflect.trusted` types—sorry, Reflect.isTemplateObject. On the cross brand check we didn’t hear any particular concerns. The two things to follow up on are for SES folks is figuring out—talking through what it’s trying to do and with Mozilla folks also understanding how this relates to their goals with trusted types, as well as implementation costs and complexity for users. Are those the two things that I should follow up on or is there anything more that I should follow up on? Is it accurate to say that we’re sticking with cross realm and we agree on the `reflect.isTemplateObject` name? + +MM: I’m not going to—so I’ll go ahead and say I’m agreeing to the naming. I still think the cross realm thing for the reasons you stated, it’s not fatal to do cross realm but I still think it’s misconceived. And as I thought I pointed out in the last TG3 meeting, given an implementation of the cross realm check, there’s a trick that turns that into a per-realm check that’s essentially free. So I don’t believe the implementation impediment either. + +DE: Okay. So we will leave that open as well. So we have concluded on the name change and we need to—I mean, given that trick that you said, the cross realm check implements a per-realm check so it— + +MM: It requires the trick to be moved into the built-in so it doesn’t expose a cross realm check. + +JHD: So we can discuss this more another time MM. It was also sort of ergonomics and consistency thing given that almost everything else in the language is— + +MM: This serves a different purpose. The purpose that it serves is broken if you let it say true for code that’s not co-reviewed with the code doing the check. + +DE: Okay. I listed here what I thought were the four kinds of use cases. Let’s discuss in TG3 if there’s a fifth usage mode where that does end up being useful. As well as MAH’s kind of broader question. + +MM: I think it’s related to MAH's broader question. + +DE: Good. Thank you. We’re still at Stage 2. + +## Continuation: Atomics.pause for Stage 3 + +Presenter: Shu-yu Guo (SYG) + +- [proposal](https://github.com/tc39/proposal-atomics-microwait) + +SYG: Thank you. 
So my plan for this continuation is to kind of zoom back out and to give the motivation for why there was an iteration number parameter to the pause method to begin with to try to clear up some of that confusion and hopefully to reiterate the broader goal that I was trying to achieve. And then go into a PR that hopefully addresses the confusion around that parameter and then finally MLS from JavaScriptCore also raised some concerns I would like to discuss in committee. + +SYG: Okay. So zooming out for why this `Atomics.pause` method has a parameter at all instead of just a no parameter null function that does a quick CPU pause, so if you look at a CPU or look at lower level language like C++ that allows in line assembly you would use intrinsic that doesn’t have arguments that executes a pause or a yield instruction. So why don’t we just also do that in JavaScript? My motivation for why to not do that is because JavaScript has highly variable performance, meaning that when you are running code in the interpreter, it takes a very different amount of time than running the exact same code in an optimizing JIT if the code is hot in the case of a loop arguably is hot and then optimized and then once optimized it runs faster than the interpreter. If you’re writing the look and writing something with the spin loop, you want the amount of time that you pause the CPU to try to get a contended lock to be mostly the same. You want that amount of time to be mostly the same between the interpreter regardless of whether your code is executing in the interpreter or executing in the JITS and if we had a single method with a single pause, that would make that more difficult. So you might think, well, okay, you still don’t need a parameter to do that. You could, for example, just say once it is optimized in the optimizing JIT `Atomics.pause` in the optimizing JIT emits ten pauses. But that makes it difficult to implement backoff algorithms which are often present in spinwait loops. Those issues basically motivated the design of taking a parameter that shows—that signals to the implementation how many times have this particular spin wait loop paused. And then engines can use that parameter to scale the number of pauses in the amount of time to wait if it thinks it’s a good idea for the particular CPU depending on both the value of the parameter itself and what tier it’s in. So that’s why this thing exists. And after chatting with folks, I am happy with the semantics that WH thought I was presenting which is that the larger the number here, the longer the time paused. + +SYG: So I have a PR that spells this out. This is Number 11. Hopefully this sentence is less confusing than the one I previously had. So larger N means longer time paused. This PR also adds some more informative notes about things like, you don’t want a user to wait an arbitrarily long time obviously. So it is recommended that implementations have a static internal upper bound on the maximum time paused on the order of hundreds of nanoseconds, this is designed to be very, very short. And if the user wanted to implement back off strategies, they could do so by manipulating this n parameter. If they want linear back off they can pass linearly increasing N and if they want reverse back off which is my intended original semantics where later iterations would wait shorter, they would simply count down instead of count up. 
So this one parameter is pretty versatile and I think does cover all the use cases I had intended for it to cover, and hopefully it is less confusing if we directly make it control the amount of time waited. But I do have to caution again: since the only thing that it controls is timing, and timing is not technically observable behavior, there’s really no way to test for compliance, but we will of course have a sentence that says what the expectation is. Before we move on to the rest of the discussion about the concerns from JavaScriptCore, I would like to go to the queue to see if there’s any questions about this clarification.

KM: So I guess I have a question on the timing thing. You know, timing can also change if your code is in the instruction cache, or on a heterogeneous SoC with high-performance cores and low-performance cores you can have slow execution of the loop on a high-efficiency core. Are you just worried that the interpreter would still have a more significant variation than even the slow core? I assume the answer is yes but I just wanted to clarify.

SYG: My intuition is yes. I don’t have a lot of experience with heterogeneous cores. I don’t know how heterogeneous we’re talking about with the efficiency cores. Yes, you’re correct that there are certainly a lot more causes of variance in JavaScript performance than just the tiering.

KM: I didn’t mean just the tiering. I mean, for a C++ developer, even if you’re not writing in JavaScript, you still have to consider—at least today, on specific platforms—that your system might have heterogeneous cores and you could be running on a—a very high-efficiency, low-performance core?

SYG: I don’t have personal experience with that. I mean, I don’t think you deploy different binaries depending on the core, because it’s automatic. The code could migrate to the efficiency core or stay on the high-power core. I don’t know what the best practice for that is. I don’t quite know what the question is.

KM: I guess it was a question of whether—my original question is whether you think that’s more significant than that, but it seems like the answer is it’s not clear.

SYG: Yeah, I think that depends on the chip, right? But the tiering is kind of like a known quantity for developers of VMs.

KM: Yeah, totally. I just mean, like, do we think this would be more significant than—being in the interpreter would be more significant than migrating some C++ code to an efficiency core?

SYG: Right.

KM: Because I agree there’s—I don’t think any platform provides a way to know which core you’re on. I don’t think you ship a different binary or anything anyway. I don’t know anybody doing that. I wonder if it’s even necessary.

SYG: So I think the general form of the question is: if you’re running somewhere where this doesn’t do anything for you, then you literally would implement this to do nothing. And speaking for V8, what I would end up doing, in the implementation of this method, is that there would be a bunch of ifdefs based on architecture: if it’s x86 it emits something, and on the M chips I won’t do anything until I see evidence to the contrary that it is helpful or something.

KM: Okay, cool. That’s fine. I was trying to understand your thoughts there. Thanks.

CDA: WH.

WH: This looks good to me. You have some minor typos in here. The one question everybody will have is what the constant of proportionality is. If you’re scaling up either linearly or exponentially it doesn’t matter.
If you’re scaling down, then it’s not clear whether a value of 10 means 10 nanoseconds or 10 microseconds or what.

SYG: That’s a great question. I think there will be—as I said, I think all implementations will have an internal static cap basically. You can’t emit 2 to the 53 pauses; that would be bad. So I think, to be safe, given that the full range here is the integer range, you would start counting at like 2 to the 53 and count down, and then let the internal cap kind of somehow handle that. Like, I don’t know what other reasonable upper range to start at is if you want to count down. I’m hesitant to say something like “implementations can give specific guidance”, but that is not something we want to commit to either.

WH: Yeah, well, users will—anybody who wants to count down will face this problem. I’m not one of the ones who wants to count down so I don’t care as much. But folks will reverse engineer what implementations do with this. And then there will be issues if you have two different implementations with radically different constants of proportionality.

SYG: I recognize the risk. I’m not sure how bad divergence is given the amount of variance in the system otherwise. I recognize that. And I take the point. It could be that it is just unrealistic to expect counting down to work, and then we further restrict this with guidance that says you count up.

WH: I’m glad you got rid of the sentence that says that the user is expected to do something. Users will do what users do. We can’t write a spec that says that the user shall do this.

SYG: Fair.

WH: Thank you.

KM: I guess in my experience with most of these kinds of spin waits, especially for like a spin lock—like a spin-then-park lock—they tend to have a pretty small count anyway. So if you had a cap that was like a thousand, that’s going to be bigger than anyone is ever going to count from, for what I think is the most common use case. For a spin lock it’s not clear that you want to continually get slower and slower anyway. So I don’t know. It seems like you only ever want to count up if you’re just doing the completely idle spin wait. You wouldn’t want the time to get smaller and smaller the longer you spin forever. I agree with the overall point, I guess. I see your point.

MF: So you said earlier that you expect the user to be able to implement varying backoff strategies using this parameter, but my understanding would be that that would require a constraint on the parameter to be linearly proportional to the number of waits. So if it was non-linearly proportional, the parameter would have a non-linear effect. From the text I only see a monotonicity guarantee and not a linearity guarantee. Is that an oversight or am I misunderstanding?

SYG: There is no linearity guarantee.

MF: How, as a user, would I implement linear backoff?

SYG: You signal the intent with increasing values of N. If the implementation wants to interpret N in a nonlinear manner, you’re out of luck, because the implementation decided the best internal thing is to never give you linear backoff. But your code communicates the intent that you want linear backoff. You cannot guarantee linear backoff. You cannot guarantee this code will always have linear backoff, because the performance variance of JavaScript is just too high.

MF: How would the implementation distinguish between me trying to communicate polynomial backoff versus linear backoff?

SYG: By N increasing linearly.

MF: Over at least three invocations.
+

SYG: You have to see enough to notice the pattern.

MF: That doesn’t seem great if we’re just trying to—if there is a loose connection between what the programmer is providing and what the implementation is actually doing, maybe the backoff strategies should be communicated more directly alongside this number.

SYG: I do not want to expand the scope of this to do research on backoff strategies and then pass like an options bag or something describing that. Like, the cycles to check for that would vastly dwarf the number of cycles to wait.

MF: That’s fair.

SYG: I think that would defeat the purpose of this API.

MF: I don’t think that the number alone is effective at communicating that.

SYG: Yeah, that’s totally fair. But it’s just that, even if you take a timestamp, for example, the number of cycles to get the time already dwarfs the amount of time you want to wait. There’s very little to work with.

MF: Okay.

EAO: Right now, the range of values for N that are conceivable is large, 1 to 2 to the power of 53, and my sense is that it would make this much more predictable from the developer point of view, if you want to count down, if the maximum N were much less—like a thousand or so; 1024 would also make sense.

SYG: It is just going to be capped anyway. It is up to the implementation to make it predictable or not predictable. Capping it at a thousand doesn’t serve an advantage.

EAO: What I mean is, if you want to count down and you start from 2 to the power of 53 and then minus 1 and minus 2 and minus 3, the whole range there is kind of huge. So with that step size it is really difficult to imagine how you would ever implement that and what it would mean.

SYG: So now that you say this, it occurs to me—it did not occur to me when I wrote this new PR, but it occurs to me now—that the reason this is nonnegative is because the original intention is that you pass the iteration count, which would also be increasing and positive. There’s nothing preventing us from giving it the full signed integer range, and you simply count down by giving negative integers. How does that sound?

EAO: I like that.

GCL: I’m happy with this overall. I just wanted to put that out there first before asking the question. So it sounds like there’s a lot of complexity over how this parameter is interpreted. And if I were writing something that spins in this way in, like, C++, like you said, there would be inline assembly and I would use the instruction I want. And I would just—like, the instruction itself is not parameterized with the iteration count. You just invoke it the number of times you want to invoke it, or in the places you want to invoke it. Why does this function not behave in that way?

SYG: For the reason that I led this session with: there’s too much variance in JavaScript performance because of interpreters and JITs. It is reasonable to have the performance mental model that you describe for C++ if this thing were in the optimizing tier. We would inline calls to `Atomics.pause` to a literal single pause. In the interpreter the call overhead is huge.

GCL: Thank you.

SYG: Because of that built-in variance it’s very hard to predict—basically impossible to predict—and I think we can recoup the predictability with this. Going back to this: in C++ this wouldn’t be a single call to pause, this would be an inner loop that backs off in how many times it emits pause.

LCA: So I had the same question as some other people about the counting down.
I think that’s mostly answered now. I like the idea of starting at zero. Alternatively I also like starting at thousand and counting to zero. I do wonder if you think that an engine is generally not going to have the value that you give it as truth or rather me use multiple of these after each other to infer you want to do exponential backoff or whatever, why do we need the parameter at all? Can’t the engine determine that you called this instruction for the third time and instead of giving you control over what backoff you want, it just has one of the backoffs internally? Because it feels to me like what you’re saying is actually the engine is not going to generally care about what you tell it and instead it’s going to do its own thing any way, but maybe I’m misinterpreting that. + +SYG: I think the engines would appreciate an intent even if it chooses to not follow it. But it’s a fair question why doesn’t this track some internal thing by itself? And indeed some VMs that this CRL does track this internally count internally but that is more complication and I think I would like fewer heuristics for things like this in the engine. Like, it seems reasonable to me that something like script in would want to communicate whether it wants linear backoff or exponential backoff and if you only infer it doesn’t has to choose and doesn’t know the spin wait loop the user is intending. It’s not like a categorical distinction because the engine has to do heuristics and I think this feels a better chance to let the engine try to do what the user intends to do. + +LCA: Okay. So I guess what you’re saying is the heuristics will be there anyway. But rather than specifying all of the possible backoff strategies beforehand, the engine decides whether it can match the pattern that you’re giving it to a backoff strategy that it implements and then it uses the backoff strategy. + +SYG: I mean, this I just tells you how many times it’s waited basically in this loop. If it wants to—sorry, that will just muddle things. But more or less, yes. + +LCA: Okay. + +CDA: WH. + +WH: In response to this: The implementation just gets a call to `pause`, it doesn’t know whether the user is pausing in the same loop or whether they went on to do something different and then are starting another backoff loop or if they’re ping-ponging between two backoff loops and interweaving them. + +LCA: With the thing that SYG said where the engine could determine with some heuristics based on the parameters you’re giving it whether it’s linear or exponential backoff would have to make the determination. You’re saying that’s not possible, then. + +SYG: There’s more. I think WH is saying I didn’t really want to take too much of a tangent here, but if you don’t pass in I directly, if you don’t pass in any kind of number directly, I think the heuristics to be fully robust would need to be somehow—needs to know the cord sign and needs to know the context switches and needs to know did it switch threads? Did it pass I directly even if two threads were hammering in the spin lock and their Is are already thread local and separate Is and you get different Is it does a heuristic and the last call is on this thread and looked like this number of cycles is a go and there was no context switch and therefore we should change this backoff strategy. It’s just more complicated heuristics. Does that make sense? + +WH: My point was simpler than that. 
You call `pause` until you actually get the resource you want, and later you start waiting for another resource. So you start calling `pause` again but now you’re waiting for something else. Without a parameter the implementation wouldn’t know that unless it’s trying to dig into your program to see what is on the stack and who is calling `pause`. + +CDA: We are past time. + +SYG: That was 30 minutes, really? Okay. Unfortunately if we’re out of time we didn’t get a chance to go into MLS’s concerns. So let’s see. To capture for the notes, my plan is to make a change to this PR to allow the full signed range of integers to allow a more natural way of counting down. I will come back at the next meeting and have discussion with Michael’s concerns but to anticipate one of his questions on the queue, I will also add a thing here that makes it basically contingent on the rest of—sorry, that it’s not a side channel. There’s a question about is this a side channel? It’s only a side channel if you can observe the timing. You can only observe the timing if you have multithreading and for the web at least that is cross organization thing and this is no different from that. But I will come back in Tokyo. Thank you for the discussion. + +MF: I don’t know if I should make a point of order about this or not. Should we be demoting this proposal? + +SYG: Why should we be demoting this proposal? + +MF: It’s currently 2.7 and going through major design reconsiderations. + +SYG: Yeah, okay. I think these are things that—I mean, I feel a little uneasy because the point of confusion is cleared. + +CDA: In any case, we need to move. So I don’t know if we have time. + +DE: I think we have to decide whether or not we’re demoting this. I oppose demotion because I think this is a very minor editorial point and we should continue to affirm that this is a good proposal. + +MLS: I somewhat disagree. Stage 2 affirms it’s good in the language. I don’t think 2.7 conveys that. + +MF: Strongly disagree with DE that is editorial considering significant— + +SYG: Strictly speaking again this is editorial that it is not observable behavior. We’re talking about expectations here of— + +MF: We’re considering changing the API. + +SYG: We in fact did not change the API after clarification, it still takes the number parameter. + +???: That’s correct. It has not changed. + +MLS: It changes semantics of the API. + +SYG: But the semantics are not observable. + +WH: Negative numbers are observable. + +SYG: That’s fair. I’m happy if the core reason to demote this is I want to expand this to count negative numbers for counting down, then I accept that as a reasonable API change thing to demote to stage 2 but like I don’t know. Like, these are fairly—like, we change stuff all the time. Like, the spirit of this proposal has not really changed. The scope has not really changed. + +DE: I wouldn’t block demotion. But I think it’s important that we figure out how to deliver on this proposal. + +CDA: Let me ask this question: What happens in October? Like, if this goes—if we say we want to move this to 2, is it coming back in October for 2.7/3? Because if that’s the case— + +SYG: Regardless of the stage it ends this meeting at, I plan to ask for Stage 3 in October. I will update the existing Test262 tests that have already landed to accept negative numbers. + +CDA: Okay. So that makes sense. What I’m getting at is what substantive difference will it make whether we move this to 2 or not at this point for this specific proposal? 
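(For reference, a rough sketch of the kind of spin-wait loop being discussed; this is illustrative only, and the counting semantics, including the proposed negative values for counting down, were not settled in this discussion.)

```ts
// Illustrative sketch only; not final API semantics. The argument to
// Atomics.pause is just a hint about how long this loop has been waiting,
// which an implementation may use to back off (e.g. linearly or exponentially).
function spinUntilNonZero(flags: Int32Array, index: number): void {
  let i = 0;
  while (Atomics.load(flags, index) === 0) {
    Atomics.pause(i++); // i-th iteration of this particular wait loop
  }
}
```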
+

SYG: I think that depends on how delegates interpret how much signal a particular stage sends.

MM: I favor demotion. I don’t think it needs to slow down our progress. It sounds like we’re all agreed on that, you know, the overall progress towards landing this. And with regard to the signal—on this one, the signal will get misunderstood in either direction. If we don’t feel like we entered this with settled semantics, then we should not stay at 2.7. SYG corrected me earlier when I was thinking of allowing 2.7 on something before I was confident of the semantics. SYG was right to stop me from agreeing to that. I think that’s the case here. We should not dilute the meaning of 2.7.

SYG: There’s a narrow legalese point that I keep making that people are not responding to. This is not observable semantics. What is the argument that this is a semantics disagreement?

LCA(?): The negative number thing?

SYG: Okay. Right. The negative number thing is fair. But the actual disagreement is not the negative number. It’s the timing, which is not observable semantics. It’s weird if the goal is to demote because the substantive disagreement is on timing, but that is not a semantics disagreement.

LCA: If there were absolutely no semantics attached to the argument, there would be no argument. I don’t agree with you. There is definitely semantics.

CDA: We are going to need to move on. We’re well past time.

SYG: I’m not asking for demotion. Some people would like demotion. If we move on, I don’t know what it means.

### Conclusion

CDA: We neither have consensus for Stage 3 nor Stage 2. So I say we just call it and let’s move on to the next topic.

## Updates from TypeScript: deferred and immediate

Presenter: Daniel Rosenwasser (DRR)

- [slides](https://onedrive.live.com/?authkey=%21ANzaoMgiLDZwOCw&id=5D3264BDC1CB4F5B%216170&cid=5D3264BDC1CB4F5B&parId=root&parQt=sharedby&o=OneUp)

DRR: Great, all right. Let’s get things kicked off. Hi everyone. I am Daniel and I work at Microsoft on the TypeScript team. Today I’m here to talk about potential TypeScript syntax. This is not a proposal for anything related to, you know, adding features to ECMAScript or JavaScript; this is more of an update to raise awareness within the committee of some work we’re doing on our side and to understand any possible concerns. And if this is something that you all find useful, then we can do more sorts of things like this in the future too. So for some background, TypeScript has what is called control flow analysis for types. So I am assuming a little bit of TypeScript familiarity here. But the concept is that if you say that a variable has a type, for example, let’s look at padding in this example, padding is declared to have type string or number and when we reference it it is going to be string or number. However, that also means that wherever we’re expecting something like a string, we should get an error, right?

DRR: Now, we can add some checks that actually affect the type, the observed type in different locations, by performing certain code checks. So, for example, what I have here is I have an if block.
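(For illustration, a minimal sketch of the kind of narrowing example being described; the exact code from the slides is not captured in these notes, and the names here are invented.)

```ts
// Sketch only; `padLeft` and its body are illustrative, not from the slides.
function padLeft(padding: string | number, input: string): string {
  if (typeof padding === "number") {
    // within this block, padding is narrowed to number...
    padding = " ".repeat(padding); // ...and this assignment makes it a string
  }
  // after the if block, the branches join and padding is observed as string
  return padding + input;
}
```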
The if block performs the type check and says if the thing is a number, and then within the block padding is observed to have the type string after the assignment, and then after the entire if block, when we join all the different cases together: when the thing was a number, we basically overwrite the type to be a string, and when it was a string, it was already a string, so we join them together and padding is observed to be a string. There’s no type error anymore, right? So this allows TypeScript to model existing JavaScript behavior without having to sort of contort yourself into writing, you know, other sorts of polymorphic checks and stuff like that. As long as you’re writing canonical JavaScript, we can pick up on the type checks you perform at runtime and model that in the static analysis that we do. So our control flow analysis is good, but it doesn’t catch all errors. That is because our analysis is limited by constraints around, you know, performance, right? Being able to do these checks in a pretty speedy way. Obviously, you can’t model all sorts of errors, because there are all sorts of limitations around basically being correct and complete. So if you were truly striving for something that is always accurate but never gives you false errors, you would have to solve the halting problem. So what we do is take an optimistic approach. We assume that most variable reads within closures are actually just reads, and writes typically never invalidate the assumptions of the control flow analysis. That usually matches what people are expecting when they write their code.

DRR: So concretely, what that means is if you have an assignment and a function, and the function actually tries to change the type of a variable underneath the covers, like basically pull the rug out from underneath you, we don’t assume that that invalidates the control flow analysis that we perform. Right? So basically the way you can think of this is that all control flow analysis happens at a single level, the top level where the function is declared. So for example, here we have x of type string or number. We assign it a string. And TypeScript is actually okay with this code. It will say the uppercase call is going to succeed. It doesn’t have any issues with this; it doesn’t think it is actually a number. But at runtime, when you actually run this thing, it is going to cause an issue. Because what happens is you call sabotage, that overwrites the value of x with a number, and this thing doesn’t work. In practice that is rare. Other type systems use the opposite approach. Flow takes the approach of saying any write inside a closure is going to invalidate all narrowing once the thing is called. And the feedback that we heard from a lot of people who use it is that it tends to be very, very frustrating as well.

DRR: So again, this is what most people have found usually checks the boxes for what they’re looking for. And it’s because most of the time that optimistic analysis does model the real world. You typically don’t try to remove capabilities. But there are some common exceptions. Right? One is where you try to say, all right, I’m done with the variable, unset the state and say it is undefined or null or whatever. Or when we would try to stack other analysis on top of this sort of thing. So the specific example, maybe a motivating example is: Let’s say that you’re trying to calculate the type of an array, the underlying element type, right?
This is sort of a simplification of what a lot of JavaScript engines will do for arrays: they will have, like, a heterogeneous or object version of this thing, and one that models int32 or an integer or something like that.

DRR: So let’s say we have an element type and it is modeled by four different string literals that are possible. These are all valid types in TypeScript. So maybe you have an initial variable that is of type unknown or has the value unknown, the string value unknown. And then, let me get the laser pointer, unknown. Then you loop through a bunch of elements of the array and try to basically say, given the current type, recompute what the type is and reassign it over time. And then later on, you check, you try to make a decision based on the type of the underlying elements. So in TypeScript this works fine. We’re able to analyze this, we are able to basically say, all right, there’s basically no issue with this code so to speak. But what happens if we try to switch this for loop into a forEach? So, what happens is this: when you do that, we now start to error here. Because we have this other analysis saying, hey, we don’t ever see the type of, or the value of, type change, so we think that the value is always the string literal unknown. And that’s because even though there’s this assignment in this closure, we basically don’t perform analysis within closures, we don’t factor that in today, right? And so that’s one of the places where the so to speak optimistic view of the world kind of falls over.

DRR: So we had this idea of what if it were possible to expand this a little bit and do better, at least in a certain class of constructs? And specifically, the idea was if you ever observe a function expression as the argument of a call, you could consider that as a possible branch of execution and say that body is a possible branch preceding post-call. So basically, if you’re trying to figure out what post-call is dominated by, it is dominated by pre-call, this is always the case, and now it is also dominated by the body. I guess dominated is not the right word, but basically it is a possible antecedent. So what that means is, in the previous example, you now say: type actually has a possible assignment in that forEach, so everything would just work. Now, the idea is that this would only be syntactic. Right? Syntactic. Right? That is one sort of limitation. But it sort of does capture quite a bit in terms of what people typically write here. It also sort of reflects what other languages do in this space, too, where you see other languages that have the concept of the trailing block as a first-class concept of adding that to control flow. But it also does mean that if you refactor this thing, take the function expression and move it to a constant or variable of any sort or just turn it into a function declaration, this thing just wouldn’t kick in at all. So I guess quickly, WH, you had a clarifying question, can you—if you want to ask?

WH: Yes. On these slides are you sketching an analysis that an implementation would do or is this intended to be user code?

DRR: This is intended to be user code. This is some user code a person wrote. The idea is that we want TypeScript not to issue an error here.
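(For illustration, a rough sketch of the for loop versus forEach situation being discussed; the actual slide code is not captured in these notes, and the names here are invented.)

```ts
// Sketch only; the element-kind names and the `refine` helper are illustrative.
type ElementKind = "unknown" | "int32" | "double" | "object";
declare function refine(current: ElementKind, value: unknown): ElementKind;

function classify(values: unknown[]): void {
  let kind: ElementKind = "unknown";

  // A plain for loop is analyzed in the same scope as the declaration, so the
  // reassignment is seen and the later check is fine:
  //   for (let i = 0; i < values.length; i++) kind = refine(kind, values[i]);

  // With forEach, the reassignment sits inside a closure that today's analysis
  // does not look into, so `kind` is still believed to be the literal "unknown"
  // and the comparison below is reported as an error.
  values.forEach(v => {
    kind = refine(kind, v);
  });

  if (kind === "int32") {
    // ...
  }
}
```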
Because if a user wrote something like this for loop, they don’t get an error, because the type system can perform the analysis at the same scope as the declaration of the variable. Whereas if you capture it and then perform the assignment, currently our type analysis does not dive into the function itself.

WH: And in this case, TypeScript wouldn’t see the mutation of `type` and think that `type` is the constant “unknown” and that `if` statement is unconditionally false — that’s what would happen?

DRR: Exactly. That’s exactly what happens here.

WH: Okay.

DRR: And so, it issues this error, even though that’s not correct. It is trying to be helpful, but unfortunately, this analysis doesn’t factor in the case where you will actually potentially assign here. I also see what seems like a clarifying question from SYG, too.

SYG: I’m told to wait, I will wait for the rest of the presentation.

DRR: Okay. Yeah. So we’re exploring a version of this where we actually do analyze all function calls that have a function expression and try to factor that into the control flow of a variable. Right? The problem with just taking that sort of naive approach is that’s also somewhat annoying. Right? It is a conservative thing to do, because you are assuming a branch of code might run. But in reality, in this example, if we have a variable x, we assign x a string, and then we call setTimeout and assign it 42 in the callback. You would really hope that this doesn’t error. Because setTimeout, the function that we’re passing to setTimeout, will happen after the next tick. It will be scheduled after the subsequent code runs. So, that can be quite annoying. So you need some way of opting out of this behavior. And so this should not be an error. But if you take the approach of, hey, this always potentially runs, it can be annoying.

DRR: So what we have—and so there’s a class of situations where this might happen. Right? `setTimeout` does asynchronous scheduling for events. You typically don’t expect them to be fired immediately. And then in some cases, there is deferred execution, so you want to have something happen when you pull a lever and, kachunk, whatever that work is gets run. So what you need is some way of saying this callback doesn’t immediately run. So that’s what we’ve been playing around with as a keyword: we have been playing around with a keyword called deferred. Basically, this is a potential TypeScript-specific keyword, it is not a JavaScript-specific keyword. I just wanted to clarify that. But the idea is, whenever you have a thing that takes a callback that you don’t intend to possibly run immediately, you would have to say that this thing is a deferred callback. Right? And then some APIs don’t necessarily take a deferred yet. But what you can do, you can use an escape hatch: declare a function called deferred, it takes a deferred callback, and then that sort of coalesces into whatever your API would have, you know, taken. So basically, there’s a way of saying, hey, these functions don’t potentially run immediately, don’t factor that into the control flow analysis within the scope. That would be, you know, one possible direction.

DRR: Now, there’s some challenges with this approach.
With deferred, we actually tried this approach in TypeScript 5.6 before we released our beta. We decided not to ship it yet, because it seemed like so much needed to be annotated. If you say that every function that doesn’t immediately schedule some work to be done has to be marked deferred, then you have to go through every event handler, every async I/O function, and every special utility function that defers work. That can be quite annoying. And it’s really subtle. Even people who work on the team have to kind of pause and decide, hey, is this a place where you have to mark something as deferred? So that often requires looking at the docs, and the docs don’t always specify when a function gets run. In theory, this is just a one-time cost where you can annotate all of the standard library, all of, like, you know, the Node declarations that are shipped to users. Sometimes there are broader platforms that do something like this, but in practice there is always some library that indirectly uses those deferred scheduling functions and it can be kind of frustrating. And it’s sort of unclear if needing to do all of that is really necessary when most people actually like the current behavior for the most part. Deferred is really just to suppress false positives, and we don’t really get a ton of real-world bugs, just a lot of people complaining that forEach, and map, and whatever, don’t behave the same way as a for loop does.

DRR: So one thing that we considered is going the other way, because people actually do like the current behavior quite a bit, or generally people find the current behavior to be, you know, acceptable. So instead of annotating things like setTimeout and all of these other functions, what if we had a different keyword, again, only specific to TypeScript, but something that would indicate: when you call this function, this parameter is a possible branch of execution for control flow analysis. So things like Array, with forEach and map and filter and all of these other things, I mean, even if you don’t necessarily intend for them to have side effects, if they do, they can indicate, yes, this function potentially gets run synchronously, so you need to factor that into your flow of execution.

DRR: WH has a clarifying question I want to address now: possible or mandatory branch of execution? So immediate means a possible branch of execution. Right? It does not mean it always gets called. But if it does get called, it will be part of the current flow of things. So yeah. I mean, the next slide actually says this. If it is called, it is called immediately. But forEach doesn’t necessarily guarantee that something is called. If you remember the earlier diagram that I showed a way back, the idea still kind of looks like this. Right? There are two different branches, you know, there’s possibly the body of the call and then the thing before the entire call itself. Or possibly the function before the body of execution. Sorry, I’m trying to run through this quickly, I might have to leave a little earlier. The pros of this are: built-in synchronous functions can just adopt this. Right? And you can sort of sprinkle it across an API, and if you don’t, it’s kind of fine, because you basically have what you have today. Right? So that means it is easier for libraries to sort of gradually adopt it. It is sort of compatible with how TypeScript has existed for the last 10 or 12 years, depending on how stable we consider it.
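(A hedged sketch of what the two directions might look like; neither keyword exists in TypeScript today, the function names below are placeholders, and the exact placement of the modifier shown on the slides is not captured in these notes.)

```ts
// Hypothetical syntax; `deferred` and `immediate` are not shipped TypeScript.

// Direction 1: mark callbacks that are not expected to run before the call
// returns, so writes inside them would not feed into narrowing.
declare function mySetTimeout(deferred cb: () => void, ms: number): number;

// Direction 2: mark callbacks that may run synchronously during the call,
// so writes inside them would be folded into control flow analysis at the
// call site.
declare function myForEach<T>(items: T[], immediate cb: (item: T) => void): void;
```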
+

DRR: It also gives us, you know, potential to grow. Right? So maybe if we see something like immediate, it could provide an opportunity to do more than just analysis on function expressions. Maybe we can do it on function declarations, too, if we experiment there. And then generalize beyond, you know, “possibly called” and go to “always called”, kind of like WH had pointed out. And also, array methods are actually most of the cases of what people complain about when they come to our issue tracker and say that our control flow analysis is not kicking in. So it is a quick win. So if we go with just that, I think a lot of people would be happy. The cons are, for one, it has to be sprinkled everywhere if you want to do it consistently. Or by everywhere, I mean on so many of the methods and functions that you have: forEach, map, and things like that. It looks clunky over time, and it is less common than deferred, maybe because you would only start adopting it as soon as you hit your first bug, and say, ah, this should be marked as deferred or as immediate; it would make things a little bit easier. And also, if it is not really required it is easy to forget. It is not clear if users would adopt it proactively. It is kind of like a double-edged sword: it is adoptable on a case-by-case basis, but if there is no carrot for you to do it, it is unknown if anyone does it beyond the core maintainers of things like the Node.js declarations and so on. So the current direction: we’re leaning away from deferred. It is kind of hard to reason about for most people. It comes up quite a bit. You really need to annotate everything. We are experimenting with immediate, because it is more incrementally adoptable. But we also might just do nothing. You know, these things don’t catch as many bugs as we would like to find. But it does provide more ergonomics in terms of what you’re able to express in the language. Right? Because what might be happening for a lot of people is they try to write a forEach, map, or filter and TypeScript doesn’t capture the specific semantics that they intend. So what they do is either rewrite that into a for loop or they just cast the result and get rid of any errors to basically say ‘I know what I’m doing, leave me alone, type system.’

DRR: So yeah, this is mostly just a kind of heads up and a discussion. I would be interested to hear feedback, any major concerns, any questions I can answer. And also, are these sorts of updates useful? We can do more in the future. I know I had to kind of rush through things today. But—yeah.

CDA: All right. I got some comments on the queue. WH?

WH: I like this. I agree with the choice of immediate over deferred. It is more immediately useful. This will just move the frontier to the refactoring hazard: people who name their functions will be annoyed that this analysis doesn’t work if the function is not inlined into the call site.

DRR: Yeah. I also appreciate the immediate pun, if that is what you intended. [ Laughter ] Yeah. Thank you.

WH: Yeah. What do you do with immediately invoked function expressions?

DRR: For immediately invoked function expressions we actually do control flow analysis today.
They are a special case where we do actually factor them in as a definite control flow node in a sense. So the outer flow will actually factor in all of the assignments and things like that into the current scope. If I had a buffer I could show that.

WH: I understand the answer, thank you.

DRR: Thanks.

CDA: PFC?

PFC: Okay. I first want to say I do find this very useful. Thank you very much for presenting it. My question was: What were your thoughts on putting the keyword after the colon so that it is clear it is part of the type or at least part of TypeScript?

DRR: Yeah. It’s something that we actually considered. And we actually moved away from doing something like that because there was a different confusion. I think it is actually tied to the next question as well from Luca. But basically, it is actually not part of the type so to speak. It is TypeScript specific, but the issue is that it is really more a matter of how TypeScript resolves a function when it performs control flow analysis and other checking and then asks: are any of the parameters marked as deferred? So the problem with trying to come up with the idea that it is part of the type is that it really doesn’t flow through the type system when you declare a function. Right? Within the function, the fact that it is deferred has no bearing on it. It doesn’t kind of flow in other places. So we didn’t want people to have this notion that, hey, maybe this is performing some sort of effect analysis or something like that in other places. It’s really only something that operates at the level of call sites. Right? It’s not at the declaration site at all that we do any analysis. So we really wanted to move away from that. We actually even thought of other ways that didn’t require a new keyword. We tried to do things that kind of looked like a specific marker type, like a deferred of, and then in angle brackets, whatever type you had. That confused people on the team as well. So we moved away from that.

CDA: Jordan?

JHD: Yeah. I mean, I have lots of praise coming in my queue item. But it really seems like it is a property of the thing that calls the callback, not the function itself. Because the function is not in charge of when it is called. So that sort of annotation belongs on map or forEach, not on the callback I pass into it. If I pass the function to something that calls it immediately and also to a place that calls it later, that would have potentially different semantics than something that I only pass into one or the other.

DRR: Yeah. I think—it is hard to visualize. But I think you’re saying something similar to what I stated. Right?

JHD: Well, I mean, I’m saying that I shouldn’t be annotating the callback. I should be annotating the API that receives the callback; this is indeed part of the type of the thing that accepts the callback, it has nothing to do with the function.

DRR: So you really don’t want to have to annotate this on any use site as well. Because it would be really annoying. Right? The function—

JHD: I’m not saying annotating it at any call, but at the boundary where it is passed in. Would that still be really annoying?

DRR: You’re saying at the call expression you would rather it be—

JHD: No, sorry, let me clarify. I’m saying if I make a function that accepts a callback—so that’s FOO.
FOO is the thing that decides when the callback is called, whether it is immediate or not.

DRR: Yes. Exactly.

JHD: The type of FOO, and only that, should describe how that function is used, and so at the call site, you should be able to infer from the type of the API whether the function being passed in is being immediately called or not.

DRR: Um—

JHD: No?

DRR: Yeah. Yeah. So we resolve the function and figure out how the function declared its parameter.

JHD: Right. So you’re saying it is a property of the parameter, not the callback itself. Awesome.

DRR: Exactly.

JHD: Okay. We may be saying the same thing.

DRR: Exactly. Sorry about that.

LCA: So first of all I want to say this is very interesting. I’m very glad you are undertaking this work. Yeah, very much a pain point. I’m also leaning towards immediate, I think maybe for some other reasons than you outlined. Namely, I think there are maybe possibilities for further type checking if you explicitly annotate things with immediate. I would like to know if you considered any of this. Mainly, if you have a callback that is annotated with immediate, do you think TypeScript would raise an error if you were to try to call this function inside of a deferred callback? So could it do a sort of escape analysis and know that the immediate function has escaped the immediate body, and you are trying to call it from somewhere where it wouldn’t be immediate anymore? If that made any sense. So maybe to give a more concrete example: you have a callback and you pass this callback—and sorry, in the function body you have a promise, you call .then on this promise, and in the callback to .then, you call the function that you annotated with immediate. This seems like it would violate the immediate sort of contract, where you’re saying that you’re going to immediately call it, because you’re not immediately calling it, you’re calling it in the deferred function.

DRR: Right. So I think we did consider some sort of analysis around: if you’ve declared something as deferred or immediate, can you guarantee that the function declaring that parameter is using it correctly? And if you’re taking the stance of immediate, it is actually always fine to say something is immediate, like immediately called, because all that means is it is possibly called and it might be called now. So consider that a possible flow of execution. And so you’re basically just taking a more conservative analysis there. And immediate is also made for a world where you’re assuming that not everything is really annotated perfectly. Right? So we really didn’t push on that exactly. Like, we did have some ideas of, oh, well, what if you had this in another callback and that is passed to another thing—can you at least guarantee somewhere in your function that it is called at some point or passed as a parameter somewhere else? But you don’t really have a lot of guarantees, and there are all sorts of patterns. It gets worse if you go the other way and mark things as deferred, because deferred could happen for all sorts of different reasons. Right? It could be stashed away in an array, it could be stashed away in some sort of event loop to be run later. Things like that. So we really didn’t feel confident we could get that analysis right.
And it wasn’t clear if it was going to have a ton of value for the cost that it would require as well.

LCA: Sure. I think that makes sense. I would be happy to also help investigate this. For example, whether it makes sense to pass immediate functions as an argument to a function that takes a deferred callback—well, not explicitly immediate. I feel like in cases like this, maybe the argument is part of the type—

DRR: You cut out. You’re cutting out. I’m sorry.

LCA: Oh, I’m so sorry. I’m—I think—if you pass an immediate—okay. Then I will—

CDA: Yeah. You’re breaking up. Sorry. Daniel, you were breaking up. But if you want to finish your thought—I will note we are past time, though, so we should wrap this up.

DRR: Yeah. So I guess I’m happy to try to answer some questions either via Matrix or, if you want, GitHub as well. That’s a possibility, too. I don’t have places to link to from here right now. And I also have to kind of run in the next minute or two as well. But I’m happy to continue talking either through Matrix or other means if you would prefer.

CDA: Great. Appreciate that.