The Wayback Machine - https://web.archive.org/web/20210414185702/https://github.com/nodejs/node/pull/36328

Https imports #36328

Draft · wants to merge 5 commits into base: master
Conversation

@bmeck (Member) commented Nov 30, 2020

This allows for HTTPS (not HTTP) imports to work with the ESM module loader. It is NOT READY. This PR is meant to allow discussion and to show some refactoring issues that are going on.

Concerns
  • Can be disabled - NEEDS CONSENSUS
  • Offline cache - NEEDS CONSENSUS
    • established file location / configuration
    • ability to eject/override cache features agreed upon
    • cache semantics agreed upon
  • Integrity checks - WIP
    • policies should cover? needs tests for protocol => domain => / separator => URL
    • certificate authorities
  • Cookies - NEEDS CONSENSUS
    • should this even be allowed / same-origin meaning, agreed upon
  • Redirects - WONTFIX / banned
    • redirecting across protocols is... insane
    • they work significantly differently than files, making dumping to disk break in odd ways
    • they make import.meta.url mean something very different by no longer having a 1-1 mapping for Web compat
    • meaning for policies is very complicated (need to add aliasing config to policies if supported)
  • MIME - FIXME
    • MIME parser is... still blocked? I'm not willing to go through the consensus anguish to move that forward; too much blood lost already to the great consensus gods, and it feels like I will cry (not in a joking manner) if I try to land it again.
  • Selective HTTP support - NEEDS CONSENSUS
    • Debugging HTTPS on a local machine is painful, should have some way to do local dev.
Documentation

Due to the scope of this we need more in-depth written documents of the impacts this PR has prior to landing.

  • Security writeup [comments enabled]
    • cover basic attack vector mitigations / responsibilities
    • cover inherent flaws of mutable node core if we use builtin https module
    • cover basic auth concerns
  • Operations concerns writeup
    • cover air gaps / purely offline systems
    • cover reliability concerns
Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines
bmeck added 2 commits Nov 30, 2020
@bmeck requested review from jasnell and MylesBorins Nov 30, 2020
@bmeck (Member Author) commented Nov 30, 2020

@nodejs/modules-active-members I couldn't find a sane way to refactor the getFormat hook in this PR. We should figure out a way to make it not do 2 requests.

@vdeturckheim (Member) commented Nov 30, 2020

Interesting, would there be a way to statically know what an app uses? Right now Sqreen parses the contents of the various node_modules and we consider that "good enough" as a source of truth.

@bmeck (Member Author) commented Nov 30, 2020

@vdeturckheim due to import() accepting arbitrary expressions, not unless every usage is a string literal. A policy file could enforce a static codebase though.
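The point about expressions can be illustrated with a toy scan (the source string and regex are illustrative only, not a real parser):

```javascript
// Toy illustration: string-literal specifiers are statically discoverable,
// but an import() of an arbitrary expression is opaque until runtime.
const source = `
import lodash from 'https://example.com/lodash.js';
const name = computeName();
import(name);
`;

// Collect only string-literal specifiers; import(name) yields nothing.
const literalSpecifiers = [...source.matchAll(/(?:from|import\()\s*['"]([^'"]+)['"]/g)]
  .map((m) => m[1]);

console.log(literalSpecifiers); // [ 'https://example.com/lodash.js' ]
```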

@vdeturckheim (Member) commented Nov 30, 2020

@bmeck true, I often kind of think that import() is somehow the eval() of modules and could have a flag to be disabled but this would be a totally different topic.
I like the policy idea, it sounds pretty elegant.

@jkrems (Contributor) commented Nov 30, 2020

I couldn't find a sane way to refactor the getFormat hook in this PR. We should figure out a way to make it not do 2 requests.

This may be a new motivation to revive #35524. I think that should fix the issue (but it's potentially disruptive).

@benjamingr (Member) commented Nov 30, 2020

I think this is really cool and it would be cool if this loader (optionally?) cached resources (according to HTTP headers, like browsers do). I think this is what you mean by "Offline cache" :]

@bmeck (Member Author) commented Nov 30, 2020

@jkrems an alternative is to have a resource cache in our default loader rather than doing everything at request time; we can drop the resource once its ref count reaches 0, or per some other metric.
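A minimal sketch of such a ref-counted cache (names and shape are hypothetical, not actual loader code):

```javascript
// Fetch a resource once, share the body across consumers, and drop the entry
// when its reference count reaches zero.
class ResourceCache {
  #entries = new Map(); // url -> { body, refs }

  acquire(url, fetchBody) {
    let entry = this.#entries.get(url);
    if (!entry) {
      entry = { body: fetchBody(url), refs: 0 };
      this.#entries.set(url, entry);
    }
    entry.refs += 1;
    return entry.body;
  }

  release(url) {
    const entry = this.#entries.get(url);
    if (entry && --entry.refs === 0) this.#entries.delete(url);
  }
}

const cache = new ResourceCache();
let fetches = 0;
const fakeFetch = () => { fetches += 1; return 'export {}'; };

cache.acquire('https://example.com/a.js', fakeFetch); // fetched
cache.acquire('https://example.com/a.js', fakeFetch); // served from cache
console.log(fetches); // 1
```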

@mcollina (Member) left a comment

I do not agree with having this enabled by default. It’s really unsafe to do, and it is detrimental to the developer experience.

(I do not have much time to explain why it’s completely unsafe to do this; I’ll try to write a longer write-up later if needed.)

@bmeck (Member Author) commented Nov 30, 2020

@mcollina I'm not entirely opposed to flagging this (such as requiring it to be listed in a policy file), but if this leads to users consistently enabling it by default, it would be good to understand why HTTPS (not HTTP) is insecure from your perspective so we can mitigate any issues.

@jasnell (Member) commented Nov 30, 2020

@mcollina... as this is still a draft and being actively discussed, I'm not sure the "-1"/Request Changes is helpful yet.

@bmeck ... the security concerns on this really come down to the general risk of running untrusted code downloaded over arbitrary internet connections. For instance, I could fairly easily write a script that returns one bit of javascript during development but replaces that with a malicious script when accessed from a production server, without the developer ever knowing that there's been a change. Putting this behind a flag for now while we figure out the threat model and various mitigations definitely makes sense and would be the right thing to do.

@bmeck (Member Author) commented Nov 30, 2020

@jasnell I've started a security model document, though due to the lack of safety in using core's HTTPS a lot will inevitably be labeled as out of scope of this feature and must be secured by other means like policies. I would note that event-stream did the same kind of replacement workflow on local files, and I do not consider this attack vector novel to adding HTTPS. Node's mitigation for that attack surface is to use a policy, and the same would be true here.

@mcollina (Member) commented Nov 30, 2020

It is insecure because there is no signature of the original file. Anybody who can take control of a domain name can take control of the code it serves.

As a community we have had some significant security incidents, from left-pad to event-stream. Adding this functionality would make us 10x more vulnerable to this kind of attack because there would be no central entity that could intervene.

@bmeck (Member Author) commented Nov 30, 2020

@mcollina can you clarify how policies aren't that intervention/mitigation?

@mcollina (Member) commented Nov 30, 2020

@mcollina... as this is still a draft and being actively discussed, I'm not sure the "-1"/Request Changes is helpful yet.

The plan at the top highlights that this will be enabled by default, and that consensus is needed on whether users can disable it. I do not agree with that approach, hence my comment. I'm happy to change my mind if it can be made safe to do so - however I do not think it is possible at all to make this safe.

@jasnell (Member) commented Nov 30, 2020

@bmeck .. for discussion.. from your checklist:

Can be disabled - NEEDS CONSENSUS

Initially I would mark this explicitly opt-in. Assuming we flip that default later, there should definitely be a way to explicitly opt-out on the process level.

Offline cache - NEEDS CONSENSUS
  • established file location / configuration
  • ability to eject/override cache features agreed upon
  • cache semantics agreed upon

Definitely a much larger discussion and one that we need to have with the package manager folks at the table. Not only would we need to figure out the caching semantics, we need to figure out the semantics around what happens if a require('https...') script does a require('some-local-script'). What if those things end up being different versions? I think what we need to do is take a moment to draw up a list of What Ifs and see if we can easily answer those.

Integrity checks - WIP

I'm going to sound like a broken record but as I've been saying for years now we 100% need this. Adding https imports would only make that need more pronounced.

  • policies should cover? needs tests for protocol => domain => / separator => URL
  • certificate authorities

Cookies - NEEDS CONSENSUS
  • should this even be allowed / same-origin meaning, agreed upon

Blocklist / Allow list of origins at the very least.
Same-origin is going to be hard. I can imagine a case where require('https...') effectively becomes its own realm, where any of the code running within is isolated in its own context and is only permitted to use either require('{built-in}') or require('https...') within the same origin or following origin policies, etc.

Cookies are a much more difficult matter. My knee jerk reaction is that we should not support cookies.

Redirects - WONTFIX / banned
  • redirecting across protocols is... insane
  • they work significantly differently than files, making dumping to disk break in odd ways
  • they make import.meta.url mean something very different by no longer having a 1-1 mapping for Web compat
  • meaning for policies is very complicated (need to add aliasing config to policies if supported)

This I'm less convinced about and need to think through more. My knee jerk reaction is that not supporting redirects in some way is very anti-web and therefore a bad thing but that's the Standards Wonk side of me speaking. Will stew on this one.

MIME - FIXME
MIME parser is... still blocked? I'm not willing to go through the consensus/ anguish to move that forward; too much blood lost already to the great consensus gods and it feels like I will cry (not in a joking manner) if I try to land it again.

I think I may be able to take this over for you but it would realistically be a 2021 project.

Selective HTTP support - NEEDS CONSENSUS

I think we can safely just say no to http modules.

Debugging HTTPS on a local machine is painful, should have some way to do local dev.

This mechanism should allow for local aliasing of https modules anyway, making it so that even if require('https...') is used, if there is a local alias established for it, the local one is used. That should address at least the immediate issue. However, the point here still stands: we'll need to provide observability into what is happening during load. We do have keylog support already built into core. We'll want to make sure we have command line options to allow keylogging for Node.js' own module loader connections.

@bmeck (Member Author) commented Nov 30, 2020

@mcollina we need to understand why you consider it unsafe given the current features and mitigations that already exist for these workflows. I don't think a blanket statement that it is unsafe, with the implication that it cannot be safe, is helpful.

@jasnell (Member) commented Nov 30, 2020

@bmeck ... I'm going to take a bit more time to dig through your write-ups and code and think it through before responding much more, as I don't want to just churn the conversation here. Hopefully others will do the same. I'll definitely take on the MIME module work though if that would make things easier for you.

@mcollina (Member) commented Nov 30, 2020

The attack vector is the same as with left-pad and event-stream. There are plenty of write-ups around on what happened. The mitigations that npm put in place are not feasible without a central/federated authority or strong cryptography.

I don't have time right now for a long write-up, I'll try to get back as soon as I can.


What would make this safe is giving authors a way to cryptographically sign the published files, and letting developers validate those files against said keys, fetched out of band.


Overall I would recommend developing this as a module/loader and then propose its addition to Node.js.

@bmeck (Member Author) commented Nov 30, 2020

What would make this safe is giving authors a way to cryptographically sign the published files, and letting developers validate those files against said keys, fetched out of band.

I don't understand this. I am vehemently against pushing code signing onto the developer if it is done improperly, and I have met over the years with a few certificate authorities about the complexity of doing this. This reduces down to the same workflow as TLS, where you pull the keys off the trusted CA (which is out of band by nature) and then verify. What advantage do we get here? Even if authors upload a personal key, such as a PGP key, the revocation mechanism and hijacking or repurposing under a different key all remain problems.

Overall I would recommend developing this as a module/loader and then propose its addition to Node.js.

What is "this"? I'd note that a variety of the mitigations are not possible in userland loaders (such as policy integration), and loaders have repeatedly been stalled out by Node's process, so I doubt they will move forward any time soon. I spent a lot of time on loaders and trying to move them forward and don't think they are worth the pushback from the consensus model we use in Node core.

@wperron commented Nov 30, 2020

Hey all, so I brought up some points recently on Twitter related to this proposal, so I thought I'd re-articulate them here. Keep in mind I've never contributed to Node.js, this is just my opinion as a long-time user and a contributor to Deno.

TL;DR:

Importing from npm isn't inherently more secure than importing from a raw URL, so let's work to make the whole ecosystem more secure, regardless of how packages are imported.

URL Imports Aren't Inherently Less Secure

First I want to address the biggest concern that always comes up when talking about URL imports: "It's insecure." The argument is usually summed up similarly to what @jasnell said earlier in this thread: "I could fairly easily write a script that returns one bit of javascript during development but replaces that with a malicious script when accessed from a production server, [...]"

It's true, but it implies that the current way of importing is somehow more secure, and that it is more secure by virtue of not explicitly using a URL. However, packages hosted on npm are vulnerable to this problem as well. Not even a month ago, there was another report of malicious code being distributed in an npm package.

We've become accustomed to assuming that central registries are somehow more secure, but it's simply not the case. Packages are still fetched from a remote source and malicious code can still make its way into npm. Once we accept that reality, URL imports become much less scary (or central registries become much more scary, depending on how you look at it).

The concerns raised here are all very valid; we just have to remember that they are issues that also affect the current import model, so I don't think it's fair to block this proposal on the grounds that it's "less secure".

I would even argue that URL imports have the potential to be more secure than traditional imports through npm, because they leverage two important aspects of the web: authority and visibility. They give end-users the possibility to verify that they are importing from a trusted domain name and that that domain name has a valid SSL certificate verifying that it is indeed that domain. They also allow verifying the whois information associated with that domain, potentially being much more explicit about when a domain or a package changes ownership. (As an aside: wouldn't you rather import express directly from https://expressjs.org than https://npmjs.org ?)

Those are all things that end users can implement themselves and include in their CI/CD pipelines and compliance tools. They've been the backbone of the internet for a long time. Browsers trust them. Users trust them. Why couldn't we?


Feedback On The Concerns Above

The other part of @jasnell's comment - "[...] without the developer ever knowing that there's been a change." - is very important; I think if URL imports are to go forward, they should be restricted to tagged versions, no "latest" allowed. deno.land/x uses the @ symbol to identify versions. While we don't support semver-type versioning like package.json does, it's probably possible to implement in order to support the same version limiting already available in Node.
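The "no latest" rule can be sketched with a specifier check (the URLs and the tag-detection regex here are purely illustrative):

```javascript
// Pinned, deno.land/x-style specifier versus a floating "latest" import that
// can change underneath you between fetches. URLs are hypothetical examples.
const pinned = 'https://deno.land/x/oak@v6.3.2/mod.ts'; // tagged: same bytes every fetch
const floating = 'https://deno.land/x/oak/mod.ts';      // latest: may change at any time

// A loader could reject unversioned specifiers with a simple check:
const hasTag = (specifier) => /@v?\d+[^/]*\//.test(new URL(specifier).pathname);

console.log(hasTag(pinned), hasTag(floating)); // true false
```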

Local caching and interop with require("some-local-package") are also two good points. Now, this might just be me being naive about the complexity of the node_modules system, but we could simply cache url-imported modules in node_modules: it would make the old style of imports compatible with url imports and solve the issue of the local cache. Sure, it wouldn't be compliant with http cache headers, but then again, node_modules already isn't and that's never been a problem in the past. In fact, I think there's a larger argument to be made here that being compliant with http cache headers doesn't make much sense on the server side.

Similarly, I agree with @jasnell with regard to cookies support; I don't think it makes much sense here.

I'm less certain about redirects, they can be helpful for resolving versions when using tags like ^1.6 or something. Maybe restrict them to the same origin?

I don't see a reason to not include an allow/block function for specific origins, though I think we should also add a recommendation in the docs for operators to include those in firewall rules (hey if we're gonna take advantage of web mechanisms, let's go all the way!)

Now for a big one: integrity checks. Honestly, I don't know how much we can do there. Ultimately we can include checksum verification to make sure that the content the user receives is the same one the server said it was sending and that it wasn't tampered with. Beyond that, I don't think there's much else that can be done. As I said earlier, it's already possible for legitimate users to upload malicious code to a legitimate platform. The best anyone can do here is certify that the content wasn't messed with by a man-in-the-middle.

@bmeck (Member Author) commented Nov 30, 2020

I'm less certain about redirects, they can be helpful for resolving versions when using tags like ^1.6 or something. Maybe restrict them to the same origin?

They can be but the way the web is specified leads to issues if you do. Let us imagine:

  1. a function resolve = import.meta.resolve that resolves URLs for importing purposes.
  2. a url /react that redirects to /react/1.6

On the web await import(resolve('/react')) !== await import('/react') because one is cached at /react and the other at /react/1.6. This problem was bad enough in ESM, when it could be caused by having both ESM and CJS entry points, that we have a whole docs section on it for Node: https://nodejs.org/dist/latest/docs/api/packages.html#packages_dual_package_hazard
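The hazard can be sketched with a toy module cache (this models the scenario only; it is not the real ESM loader):

```javascript
// /react redirects to /react/1.6, so a resolver that follows the redirect and a
// bare import that does not end up caching two distinct module instances.
const moduleCache = new Map();

function load(url, { followRedirect = false } = {}) {
  const finalUrl = followRedirect && url === '/react' ? '/react/1.6' : url;
  if (!moduleCache.has(finalUrl)) moduleCache.set(finalUrl, { url: finalUrl, state: {} });
  return moduleCache.get(finalUrl);
}

const viaResolve = load('/react', { followRedirect: true }); // cached at /react/1.6
const viaImport = load('/react');                            // cached at /react
console.log(viaResolve === viaImport); // false: dual-instance hazard, over HTTP
```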

@bmeck (Member Author) commented Nov 30, 2020

Per the integrity checks that many people keep bringing up, please read https://nodejs.org/dist/latest/docs/api/policy.html as policies already cover most of these concerns. However, as @wperron points out, they assert integrity, not whether the resource is hostile.
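For reference, a policy manifest of the kind those docs describe pins an integrity value per resource and can list allowed dependencies. The paths, URL, and hash below are placeholders, and treating an https: URL as a policy resource is exactly the piece this PR would need to define:

```json
{
  "resources": {
    "./app.mjs": {
      "integrity": "sha384-...",
      "dependencies": {
        "https://example.com/dep.mjs": true
      }
    }
  }
}
```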

@bmeck (Member Author) commented Dec 1, 2020

@ExE-Boss that isn't proposed because it can't handle cycles; use a policy in node or <link> in browser.

@ExE-Boss (Contributor) commented Dec 1, 2020

@bmeck But a local file:// module can’t have a cyclic dependency with a remote module.


In this case, I see it more like:

<script type="module" src="https://example.org/foo.js" integrity="sha512-..."></script>
@bmeck (Member Author) commented Dec 1, 2020

@ExE-Boss why - are you talking about implementing CORS and the no-cross-origin-except-HTTPS limitation from fetch? Also, the problem is with the attribute design, not with asserting integrity.

@jasnell (Member) commented Dec 1, 2020

@GeoffreyBooth:

...but as the Deno example proves, they can be addressed

I'd be far more cautious here. Deno is nice but it's still very young and any security mitigations it may have implemented are far from battle-tested. Anecdotally it is a good story but we should be careful with claims that it has "addressed" all of the concerns.

@bmeck (Member Author) commented Dec 1, 2020

Anecdotally it is a good story but we should be careful with claims that it has "addressed" all of the concerns.

Are there docs on its mitigations anywhere?

@jsumners (Contributor) commented Dec 1, 2020

Regarding the point that central registries are just as big of a nightmare for security as random URLs:

That may be true, but at least with a central registry you have a higher level of management, mitigation, and various other things. Whereas with https://repo.example.com/my-cool-module.js, repo can go away, example.com could transfer ownership, and my-cool-module.js could simply 404. Even if example.com doesn't change ownership, it is still unreliable because people reorganize their sites from time to time and rarely, unless they are a spec pedant like myself, offer up 301s.

@benjamingr (Member) commented Dec 1, 2020

@MylesBorins had a nice idea and writeup in WICG about out-of-band integrity checks IIRC

@ExE-Boss in-band integrity checks (in the module itself as part of the metadata) are very problematic because a single module can mess with the integrity of the whole system very easily.

@benjamingr (Member) commented Dec 1, 2020

Kind of playing devil's advocate here but:

Regarding the point that central registries are just as big of a nightmare for security as random URLs:

That's great and why we need policies/lock-files. The other side of the coin is that with npm/central registries you don't bundle dependencies with the code - so you can depend on module A which can depend on module B, and you have no control over the publisher of module B.

Note that the npm registry is public and we have no control over who publishes what to npm - so stuff like "the domain changing" or "the file 404ing" just translates to "the author of a dependency of your dependency publishes a patch version with malicious code".


Also fwiw, we can 100% enforce these things (have the cache automatically convert to a lock file that asserts the domain certificate didn't change hands, etc.).

@benjamingr (Member) commented Dec 1, 2020

Also, as one of the few people actually writing Deno code here (I think?) - Deno's modules don't really work very well (yet?) in certain cases. I think this feature is really nice for prototyping and writing code with different guarantees, and is still a net positive. This is very easy to do without URL imports, as others have mentioned, with an HTTP request plus importing a data: URL.

I am happy to loop-in the Deno people (which btw can be all found in the #dev channel in the Deno discord) and ask them for feedback and what doesn't work well yet or for advice. They're nice people and will likely help us.
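The workaround benjamingr mentions can be sketched like this (the module source is inlined here; in practice it would be the body of an HTTPS response):

```javascript
// "HTTP request + data: URL import": fetch the source yourself, then hand it
// to the ESM loader as a data: URL, which Node already accepts.
const code = 'export default 42;'; // stand-in for a fetched HTTPS response body
const dataUrl = 'data:text/javascript,' + encodeURIComponent(code);

import(dataUrl).then((mod) => {
  console.log(mod.default); // 42
});
```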

@ljharb (Member) commented Dec 1, 2020

@benjamingr it seems like for prototyping, the ability to have https imports inside node_modules isn't important, only in the top-level app. thoughts?

@benjamingr (Member) commented Dec 7, 2020

@ljharb sorry, missed that - deno has no concept of "node_modules" - dependencies can't have dependencies; if your dependency needs to have dependencies it needs to bundle them (with deno bundle hopefully, or unpkg or whatever).

Edit: See discussion below, as William has pointed out this is in fact possible in Deno and my previous assertion was false, I just happened to never have used it in my own code.

So for me top level is enough but it's definitely used in a nested way in Deno.

As explained above, integrity checking is done through the lock file: https://deno.land/manual/linking_to_external_code/integrity_checking.

@wperron commented Dec 7, 2020

@benjamingr Deno can definitely have a deep dependency graph, and dependencies can have their own dependencies and so on and so forth. The whole graph is downloaded and cached locally in the global cache folder only once before running, and will run off the cache in subsequent runs, much like npm install works. Just wanted to clarify that.

@benjamingr (Member) commented Dec 7, 2020

@wperron I guess that's possible, I've always written Deno using:

  • Either local imports (which of course can have their own imports and be nested).
  • Remote imports that are bundled files.

I never considered remote imports downloading their own remote imports without being bundled, so much so that after using it for a while I didn't even realize that works until I read your message (though I believe you that it does - why wouldn't it? It's just another module and those can load modules from the web).

Have you seen deno code bases that do this with https imports (load something from https and that loads other things from https instead of bundling them)?

@wperron commented Dec 7, 2020

@benjamingr I don't want to hog the conversation with Deno specifics, but just as an example the std lib in deno has interdependencies with itself in a couple of places, and the server framework Oak also imports directly from a bunch of urls which have their own dependencies; just try deno info https://deno.land/x/oak/mod.ts and you'll get the whole tree.

@bmeck (Member Author) commented Dec 7, 2020

Just to clarify, I think sorting out what Deno does and does not guarantee is going to be a large part of this conversation, so a threaded format might be better. I've opened a discussion in the hopes that we get a better interaction experience, since there are many topics going on here: #36430

@mcollina self-requested a review Dec 10, 2020
@mcollina (Member) commented Dec 10, 2020

I've thought a lot about this in the last week or so, and I think we should do it. Given a few things that have been mentioned above:

  • tight permission system (no wildcards by default)
  • no https import from node_modules sources
  • no node_modules import from https sources
  • "recording" mode, with clear notices that downloading and not committing the sources will create problems. This also adds integrity checks.
@ljharb (Member) commented Dec 10, 2020

presumably also, no filesystem imports of any kind from https sources?

@mcollina (Member) commented Dec 10, 2020

presumably also, no filesystem imports of any kind from https sources?

Yes, exactly.

@bmeck (Member Author) commented Dec 10, 2020

In general, cross-origin fetching is already well limited by fetch behavior (https://fetch.spec.whatwg.org/#main-fetch); even if we don't follow the browser spec, compatibility there is ideal.

@bmeck (Member Author) commented Dec 10, 2020

@mcollina

tight permission system (no wildcards by default)

Not sure what this means.

no https import from node_modules sources

The Application => node_modules boundary isn't currently super clear; I think that would need to be ironed out a bit more. Doing this creates a privileged scope that would need a model.

I also don't think this is a very valuable thing to have, given the other discussion above. Claims of needing to handle reliability are fully in the application runner's hands and can be mitigated as above. The application scope does not have a special or different means of mitigating the same issues.

no node_modules import from https sources

In general, cross-origin stuff needs to be done carefully. The web bans http{,s}: => non-http{,s}: fetches except for blob:/data: of the same origin. I think cross-origin fetching in general is what is really being talked about here, with this PR bringing up a small subset focused on http{,s}:

"recording" mode, with clear notices that downloading and not committing the sources will create problems. This also adds integrity checks.

This integrity recording is generally an issue not unique to HTTPS, as proven by a decent number of npm incidents that involved modifying files on disk or downloading new files.

@benjamingr (Member) commented Dec 10, 2020

So just to be clear - everyone is mostly in favour of doing this and we're trying to figure out how to do this safely and securely?

@bmeck (Member Author) commented Dec 10, 2020

@benjamingr I think there is also the question of whether implementing the measures needed to make this secure and robust is worth the complexity.

@ljharb (Member) commented Dec 11, 2020

@benjamingr personally i don’t think this is a good feature whatsoever and i don’t think it should be possible in node core, but i don’t have standing to block nor would i merely based on my personal opinion. if we do land it, i hope we find ways to avoid footguns and make it safe by default, even if that adds ergonomic friction.

@bmeck (Member Author) commented Dec 11, 2020

I don't think ergonomic friction with reasons is something to be avoided, but we need to be very clear about what we gain from having things. If the ergonomic friction exists only for its own sake, that isn't serving our users. Preventing accidents can lead to a similar situation, sure, but preventing usage at all seems to be the desire in some comments above.

@jimmywarting commented Feb 27, 2021

presumably also, no filesystem imports of any kind from https sources?

That is one thing I like about Deno modules: they are sandboxed and require permission to use fs and net, so those third-party packages are somewhat safe and you need to grant them access, kind of like how the browser permission API works on the web (but for Deno modules). I wish all node/npm modules were like this as well. If I want to include, say, lodash, then I would never ever want it to have anything to do with http(s), sockets, net, fs, or process.exit() whatsoever.

I wish I could disable http, fs and process for any third-party module in node, and give only expressjs access to http on a per-module basis.

One thing I dislike about npm modules, though, is postinstall scripts that run code on your machine. I think that can be somewhat scary.


I also wish browsers followed suit and forbade e.g. jQuery loaded from a CDN from using xhr, fetch, indexeddb, or localstorage; it should only have access to the DOM.
I think Realms can be a cool feature; looking forward to it.

@bmeck (Member Author) commented Feb 27, 2021

@jimmywarting please see https://nodejs.org/api/policy.html for such restrictions at a package level in Node for now. https://github.com/bmeck/local-fs-https-imports uses that to ensure the local copies of HTTPS resources can't access things, per the discussion of this PR. That discussion is likely where you should state any specific concerns about defaults for HTTPS: #36430
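As a sketch of that kind of package-level restriction (paths and the allow-lists here are hypothetical): under --experimental-policy, a resource whose "dependencies" map omits a specifier cannot load it, so a manifest can deny lodash everything while granting the app its http access:

```json
{
  "resources": {
    "./node_modules/lodash/index.js": {
      "integrity": "sha384-...",
      "dependencies": {}
    },
    "./server.js": {
      "integrity": "sha384-...",
      "dependencies": {
        "express": true,
        "http": true
      }
    }
  }
}
```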

@jimmywarting's comment was marked as off-topic.

@guybedford force-pushed the nodejs:master branch from dc5a5da to 8e46568 Mar 29, 2021