This commit is contained in:
Florian Federspiel
2023-11-25 16:53:52 +01:00
commit 677030f712
685 changed files with 148719 additions and 0 deletions

test/imaps/node_modules/mailsplit/LICENSE.EUPL-1.2 generated vendored Normal file

@@ -0,0 +1,153 @@
EUROPEAN UNION PUBLIC LICENCE v. 1.2
EUPL © the European Union 2007, 2016
This European Union Public Licence (the 'EUPL') applies to the Work (as defined below) which is provided under the terms of this Licence. Any use of the Work, other than as authorised under this Licence, is prohibited (to the extent such use is covered by a right of the copyright holder of the Work).
The Work is provided under the terms of this Licence when the Licensor (as defined below) has placed the following notice immediately following the copyright notice for the Work:
Licensed under the EUPL
or has expressed by any other means his willingness to license under the EUPL.
1. Definitions
In this Licence, the following terms have the following meaning:
— 'The Licence': this Licence.
— 'The Original Work': the work or software distributed or communicated by the Licensor under this Licence, available as Source Code and also as Executable Code as the case may be.
— 'Derivative Works': the works or software that could be created by the Licensee, based upon the Original Work or modifications thereof. This Licence does not define the extent of modification or dependence on the Original Work required in order to classify a work as a Derivative Work; this extent is determined by copyright law applicable in the country mentioned in Article 15.
— 'The Work': the Original Work or its Derivative Works.
— 'The Source Code': the human-readable form of the Work which is the most convenient for people to study and modify.
— 'The Executable Code': any code which has generally been compiled and which is meant to be interpreted by a computer as a program.
— 'The Licensor': the natural or legal person that distributes or communicates the Work under the Licence.
— 'Contributor(s)': any natural or legal person who modifies the Work under the Licence, or otherwise contributes to the creation of a Derivative Work.
— 'The Licensee' or 'You': any natural or legal person who makes any usage of the Work under the terms of the Licence.
— 'Distribution' or 'Communication': any act of selling, giving, lending, renting, distributing, communicating, transmitting, or otherwise making available, online or offline, copies of the Work or providing access to its essential functionalities at the disposal of any other natural or legal person.
2. Scope of the rights granted by the Licence
The Licensor hereby grants You a worldwide, royalty-free, non-exclusive, sublicensable licence to do the following, for the duration of copyright vested in the Original Work:
— use the Work in any circumstance and for all usage,
— reproduce the Work,
— modify the Work, and make Derivative Works based upon the Work,
— communicate to the public, including the right to make available or display the Work or copies thereof to the public and perform publicly, as the case may be, the Work,
— distribute the Work or copies thereof,
— lend and rent the Work or copies thereof,
— sublicense rights in the Work or copies thereof.
Those rights can be exercised on any media, supports and formats, whether now known or later invented, as far as the applicable law permits so.
In the countries where moral rights apply, the Licensor waives his right to exercise his moral right to the extent allowed by law in order to make effective the licence of the economic rights here above listed.
The Licensor grants to the Licensee royalty-free, non-exclusive usage rights to any patents held by the Licensor, to the extent necessary to make use of the rights granted on the Work under this Licence.
3. Communication of the Source Code
The Licensor may provide the Work either in its Source Code form, or as Executable Code. If the Work is provided as Executable Code, the Licensor provides in addition a machine-readable copy of the Source Code of the Work along with each copy of the Work that the Licensor distributes or indicates, in a notice following the copyright notice attached to the Work, a repository where the Source Code is easily and freely accessible for as long as the Licensor continues to distribute or communicate the Work.
4. Limitations on copyright
Nothing in this Licence is intended to deprive the Licensee of the benefits from any exception or limitation to the exclusive rights of the rights owners in the Work, of the exhaustion of those rights or of other applicable limitations thereto.
5. Obligations of the Licensee
The grant of the rights mentioned above is subject to some restrictions and obligations imposed on the Licensee. Those obligations are the following:
Attribution right: The Licensee shall keep intact all copyright, patent or trademarks notices and all notices that refer to the Licence and to the disclaimer of warranties. The Licensee must include a copy of such notices and a copy of the Licence with every copy of the Work he/she distributes or communicates. The Licensee must cause any Derivative Work to carry prominent notices stating that the Work has been modified and the date of modification.
Copyleft clause: If the Licensee distributes or communicates copies of the Original Works or Derivative Works, this Distribution or Communication will be done under the terms of this Licence or of a later version of this Licence unless the Original Work is expressly distributed only under this version of the Licence (for example by communicating 'EUPL v. 1.2 only'). The Licensee (becoming Licensor) cannot offer or impose any additional terms or conditions on the Work or Derivative Work that alter or restrict the terms of the Licence.
Compatibility clause: If the Licensee Distributes or Communicates Derivative Works or copies thereof based upon both the Work and another work licensed under a Compatible Licence, this Distribution or Communication can be done under the terms of this Compatible Licence. For the sake of this clause, 'Compatible Licence' refers to the licences listed in the appendix attached to this Licence. Should the Licensee's obligations under the Compatible Licence conflict with his/her obligations under this Licence, the obligations of the Compatible Licence shall prevail.
Provision of Source Code: When distributing or communicating copies of the Work, the Licensee will provide a machine-readable copy of the Source Code or indicate a repository where this Source will be easily and freely available for as long as the Licensee continues to distribute or communicate the Work.
Legal Protection: This Licence does not grant permission to use the trade names, trademarks, service marks, or names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the copyright notice.
6. Chain of Authorship
The original Licensor warrants that the copyright in the Original Work granted hereunder is owned by him/her or licensed to him/her and that he/she has the power and authority to grant the Licence.
Each Contributor warrants that the copyright in the modifications he/she brings to the Work are owned by him/her or licensed to him/her and that he/she has the power and authority to grant the Licence.
Each time You accept the Licence, the original Licensor and subsequent Contributors grant You a licence to their contributions to the Work, under the terms of this Licence.
7. Disclaimer of Warranty
The Work is a work in progress, which is continuously improved by numerous Contributors. It is not a finished work and may therefore contain defects or 'bugs' inherent to this type of development.
For the above reason, the Work is provided under the Licence on an 'as is' basis and without warranties of any kind concerning the Work, including without limitation merchantability, fitness for a particular purpose, absence of defects or errors, accuracy, non-infringement of intellectual property rights other than copyright as stated in Article 6 of this Licence.
This disclaimer of warranty is an essential part of the Licence and a condition for the grant of any rights to the Work.
8. Disclaimer of Liability
Except in the cases of wilful misconduct or damages directly caused to natural persons, the Licensor will in no event be liable for any direct or indirect, material or moral, damages of any kind, arising out of the Licence or of the use of the Work, including without limitation, damages for loss of goodwill, work stoppage, computer failure or malfunction, loss of data or any commercial damage, even if the Licensor has been advised of the possibility of such damage. However, the Licensor will be liable under statutory product liability laws as far as such laws apply to the Work.
9. Additional agreements
While distributing the Work, You may choose to conclude an additional agreement, defining obligations or services consistent with this Licence. However, if accepting obligations, You may act only on your own behalf and on your sole responsibility, not on behalf of the original Licensor or any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by the fact You have accepted any warranty or additional liability.
10. Acceptance of the Licence
The provisions of this Licence can be accepted by clicking on an icon 'I agree' placed under the bottom of a window displaying the text of this Licence, or by affirming consent in any other similar way, in accordance with the rules of applicable law. Clicking on that icon indicates your clear and irrevocable acceptance of this Licence and all of its terms and conditions.
Similarly, you irrevocably accept this Licence and all of its terms and conditions by exercising any rights granted to You by Article 2 of this Licence, such as the use of the Work, the creation by You of a Derivative Work or the Distribution or Communication by You of the Work or copies thereof.
11. Information to the public
In case of any Distribution or Communication of the Work by means of electronic communication by You (for example, by offering to download the Work from a remote location), the distribution channel or media (for example, a website) must at least provide to the public the information requested by the applicable law regarding the Licensor, the Licence and the way it may be accessible, concluded, stored and reproduced by the Licensee.
12. Termination of the Licence
The Licence and the rights granted hereunder will terminate automatically upon any breach by the Licensee of the terms of the Licence.
Such a termination will not terminate the licences of any person who has received the Work from the Licensee under the Licence, provided such persons remain in full compliance with the Licence.
13. Miscellaneous
Without prejudice of Article 9 above, the Licence represents the complete agreement between the Parties as to the Work.
If any provision of the Licence is invalid or unenforceable under applicable law, this will not affect the validity or enforceability of the Licence as a whole. Such provision will be construed or reformed so as necessary to make it valid and enforceable.
The European Commission may publish other linguistic versions or new versions of this Licence or updated versions of the Appendix, so far as this is required and reasonable, without reducing the scope of the rights granted by the Licence. New versions of the Licence will be published with a unique version number.
All linguistic versions of this Licence, approved by the European Commission, have identical value. Parties can take advantage of the linguistic version of their choice.
14. Jurisdiction
Without prejudice to specific agreement between parties,
— any litigation resulting from the interpretation of this Licence, arising between the European Union institutions, bodies, offices or agencies, as a Licensor, and any Licensee, will be subject to the jurisdiction of the Court of Justice of the European Union, as laid down in Article 272 of the Treaty on the Functioning of the European Union,
— any litigation arising between other parties and resulting from the interpretation of this Licence will be subject to the exclusive jurisdiction of the competent court where the Licensor resides or conducts its primary business.
15. Applicable Law
Without prejudice to specific agreement between parties,
— this Licence shall be governed by the law of the European Union Member State where the Licensor has his seat, resides or has his registered office,
— this Licence shall be governed by Belgian law if the Licensor has no seat, residence or registered office inside a European Union Member State.
Appendix
'Compatible Licences' according to Article 5 of the EUPL are:
— GNU General Public License (GPL) v. 2, v. 3
— GNU Affero General Public License (AGPL) v. 3
— Open Software License (OSL) v. 2.1, v. 3.0
— Eclipse Public License (EPL) v. 1.0
— CeCILL v. 2.0, v. 2.1
— Mozilla Public Licence (MPL) v. 2
— GNU Lesser General Public Licence (LGPL) v. 2.1, v. 3
— Creative Commons Attribution-ShareAlike v. 3.0 Unported (CC BY-SA 3.0) for works other than software
— European Union Public Licence (EUPL) v. 1.1, v. 1.2
— Québec Free and Open-Source Licence, Reciprocity (LiLiQ-R) or Strong Reciprocity (LiLiQ-R+)
The European Commission may update this Appendix to later versions of the above licences without producing a new version of the EUPL, as long as they provide the rights granted in Article 2 of this Licence and protect the covered Source Code from exclusive appropriation.
All other changes or additions to this Appendix require the production of a new version of the EUPL.

test/imaps/node_modules/mailsplit/LICENSE.MIT generated vendored Normal file

@@ -0,0 +1,16 @@
Copyright (c) 2011-2019 Andris Reinman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

test/imaps/node_modules/mailsplit/README.md generated vendored Normal file

@@ -0,0 +1,212 @@
# mailsplit
Split an email message stream into structured parts and join these parts back into an email message stream. If you do not modify the parsed data then the rebuilt message should be an exact copy of the original.
This is useful if you want to modify some specific parts of an email, for example to add tracking images or unsubscribe links to the HTML part of the message without changing any other parts of the email.
Supports both `<CR><LF>` and `<LF>` (or mixed) line endings. Embedded rfc822 messages are also parsed; in this case you would get two sequential 'node' objects with no 'data' or 'body' in between (the first 'node' is for the container node and the second for the root node of the embedded message).
In general this module is a primitive for building e-mail parsers/handlers like [mailparser](https://www.npmjs.com/package/mailparser). Alternatively you could use it to parse other MIME-like structures, for example _mbox_ files or multipart/form-data uploads.
See [rewrite-html.js](examples/rewrite-html.js) for a usage example where HTML content is modified on the fly (the example script adds a link to every _text/html_ node).
## Usage
### Install
Install from [npm](https://www.npmjs.com/package/mailsplit):

```
npm install mailsplit --save
```
### Split message stream
`Splitter` is a transformable stream where input is a byte stream and output is an object stream.
```javascript
let Splitter = require('mailsplit').Splitter;
let splitter = new Splitter(options);
```
Where
- **options** is an optional options object
- **options.ignoreEmbedded** (boolean, defaults to false) if true then treat message/rfc822 nodes as normal leaf nodes and do not try to parse them
- **options.maxHeadSize** (number, defaults to Infinity) limits message header size in bytes
#### Events
**'data'** event emits the next parsed object from the message stream.
#### Data objects
- **type**
- `'node'` means that we reached the next mime node and the previous one is completely processed
- `'data'` provides us multipart body parts, including boundaries. This data is not directly related to any specific multipart node, basically it includes everything between the end of one normal node and the header of next node
- `'body'` provides us next chunk for the last seen `'node'` element
- **value** is a buffer value for `'body'` and `'data'` parts
- **getDecoder()** is a function that returns a stream object you can use to decode node contents. Write data from 'body' to decoder and read decoded Buffer value out from it
- **getEncoder()** is a function that returns a stream object you can use to encode node contents. Write buffer data to encoder and read encoded object value out that you can pass to a Joiner
Element with type `'node'` has a bunch of header related methods and properties, see [below](#manipulating-headers).
**Example**
```javascript
let Splitter = require('mailsplit').Splitter;
let splitter = new Splitter();
// handle parsed data
splitter.on('data', data => {
switch (data.type) {
case 'node':
// node header block
process.stdout.write(data.getHeaders());
break;
case 'data':
// multipart message structure
// this is not related to any specific 'node' block as it includes
// everything between the end of some node body and between the next header
process.stdout.write(data.value);
break;
case 'body':
// Leaf element body. Includes the body for the last 'node' block. You might
// have several 'body' calls for a single 'node' block
process.stdout.write(data.value);
break;
}
});
// send data to the parser
someMessageStream.pipe(splitter);
```
### Manipulating headers
If the data object has `type='node'` then you can modify headers for that node (headers can be modified until the data object is passed over to a `Joiner`)
- **node.getHeaders()** returns a Buffer value with generated headers. If you have not modified the headers object in any way then you should get the exact copy of the original. In case you have done something (for example removed a key, or added a new header key), then all linebreaks are forced to `<CR><LF>` even if the original headers used just `<LF>`
- **node.setContentType(contentType)** sets or updates mime type for the node
- **node.setCharset(charset)** sets or updates character set in the Content-Type header
- **node.setFilename(filename)** sets or updates filename in the Content-Disposition header (unicode allowed)
You can manipulate specific header keys as well using the `headers` object
- **node.headers.get(key)** returns an array of strings with all header rows for the selected key (these are full header lines, so key name is part of the row string, eg `["Subject: This is subject line"]`)
- **node.headers.getFirst(key)** returns string value of the specified header key (eg `"This is subject line"`)
- **node.headers.hasHeader(key)** returns boolean value if the specified header key exists
- **node.headers.add(key, value [,index])** adds a new header value to the specified index or to the top of the header block if index is not specified
- **node.headers.update(key, value[, relativeKeyIndex])** replaces a header value. The relative key index is counted among occurrences of the same key (eg. if multiple `X-Foo-Bar` headers exist and you pass `1`, the second one is updated). If the specified relative key index does not exist, nothing is changed (eg. with two `X-Foo-Bar` headers and an index of `2`, pointing at a non-existent third occurrence, no update is made). If no relative key index is given, then all matching values for the key are removed and a single entry with the new value is inserted at the position of the first match
- **node.headers.remove(key)** remove header value
- **node.headers.mbox** If this is a MBOX formatted message then this value holds the prefix line (eg. "From MAILER-DAEMON Fri Jul 8 12:08:34 2011")
- **node.headers.http** If this is an HTTP POST form-data request then this value holds the HTTP prefix line (eg. "POST /upload.php HTTP/1.1")
Additionally you can check the details of the node with the following properties automatically parsed from the headers:
- **node.root** if true then it means this is the message root, so this node should contain Subject, From, To etc. headers
- **node.contentType** returns the mime type of the node (eg. 'text/html')
- **node.disposition** either `'attachment'`, `'inline'` or `false` if not set
- **node.charset** returns the charset of the node as defined in 'Content-Type' header (eg. 'UTF-8') or false if not defined
- **node.encoding** returns the Transfer-Encoding value (eg. 'base64' or 'quoted-printable') or false if not defined
- **node.multipart** if has value, then this is a multipart node (does not have 'body' parts)
- **node.filename** is set if the headers contain a filename value. This is decoded to unicode, so it is a normal string or false if not found
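The `update()` semantics above are the trickiest part of the header API. As an illustration only, here is a stand-alone sketch of the documented rules (a hypothetical helper modelling headers as a plain array of `{ key, line }` entries, not mailsplit's actual implementation):

```javascript
// Hypothetical model of the documented update() rules, not mailsplit code.
// Headers are represented as { key, line } entries, top to bottom.
function updateHeader(lines, key, value, relativeKeyIndex) {
    key = key.toLowerCase();
    const entry = { key, line: `${key}: ${value}` };
    // positions of every occurrence of this key
    const positions = lines
        .map((l, i) => (l.key === key ? i : -1))
        .filter(i => i >= 0);
    if (typeof relativeKeyIndex === 'number') {
        // only the n-th occurrence of the key is replaced; out of range => no-op
        if (relativeKeyIndex >= positions.length) {
            return lines;
        }
        lines[positions[relativeKeyIndex]] = entry;
        return lines;
    }
    // no index given: drop every occurrence and insert a single new entry
    // at the position of the first match (or prepend if the key was missing)
    const result = lines.filter(l => l.key !== key);
    result.splice(positions.length ? positions[0] : 0, 0, entry);
    return result;
}
```

With two `x-foo-bar` entries, `updateHeader(lines, 'X-Foo-Bar', 'c', 1)` touches only the second one, an index of `2` is a no-op, and omitting the index collapses both entries into one.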
### Join parsed message stream
`Joiner` is a transformable stream where input is the object stream from `Splitter` and output is a byte stream.
```javascript
let Splitter = require('mailsplit').Splitter;
let Joiner = require('mailsplit').Joiner;
let splitter = new Splitter();
let joiner = new Joiner();
// pipe a message source to splitter, then joiner and finally to stdout
someMessageStream
.pipe(splitter)
.pipe(joiner)
.pipe(process.stdout);
```
### Rewrite specific nodes
`Rewriter` is a simple helper class to modify nodes that match a filter function. You can pipe a Splitter stream directly into a Rewriter and pipe Rewriter output to a Joiner.
Rewriter takes the following argument:
- **filterFunc** gets the current node as argument and starts processing it if `filterFunc` returns true
Once Rewriter finds a matching node, it emits the following event:
- _'node'_ with an object argument `data`
- `data.node` includes the current node with headers
- `data.decoder` is the decoder stream that you can read data from
- `data.encoder` is the encoder stream that you can write data to. Whatever you write into that stream will be encoded properly and inserted as the content of the current node
```javascript
let Splitter = require('mailsplit').Splitter;
let Joiner = require('mailsplit').Joiner;
let Rewriter = require('mailsplit').Rewriter;
let splitter = new Splitter();
let joiner = new Joiner();
let rewriter = new Rewriter(node => node.contentType === 'text/html');
rewriter.on('node', data => {
    // manage headers with data.node.headers
    data.node.headers.add('X-Processed-Time', new Date().toISOString());
// do nothing, just reencode existing data
data.decoder.pipe(data.encoder);
});
// pipe a message source to splitter, then rewriter, then joiner and finally to stdout
someMessageStream
.pipe(splitter)
.pipe(rewriter)
.pipe(joiner)
.pipe(process.stdout);
```
### Stream specific nodes
`Streamer` is a simple helper class to stream nodes that match a filter function. You can pipe a Splitter stream directly into a Streamer and pipe Streamer output to a Joiner.
Streamer takes the following argument:
- **filterFunc** gets the current node as argument and starts processing it if `filterFunc` returns true
Once Streamer finds a matching node, it emits the following event:
- _'node'_ with an object argument `data`
- `data.node` includes the current node with headers (informational only, you can't modify it)
- `data.decoder` is the decoder stream that you can read data from
- `data.done` is a function you must call once you have processed the stream
```javascript
let Splitter = require('mailsplit').Splitter;
let Joiner = require('mailsplit').Joiner;
let Streamer = require('mailsplit').Streamer;
let fs = require('fs');
let splitter = new Splitter();
let joiner = new Joiner();
let streamer = new Streamer(node => node.contentType === 'image/jpeg');
streamer.on('node', data => {
// write to file
data.decoder.pipe(fs.createWriteStream(data.node.filename || 'image.jpg'));
data.done();
});
// pipe a message source to splitter, then streamer, then joiner and finally to stdout
someMessageStream
.pipe(splitter)
.pipe(streamer)
.pipe(joiner)
.pipe(process.stdout);
```
### Benchmark
Parsing and re-building messages is not fast but it isn't slow either. On my Macbook Pro I got around 22 MB/second (single process, single parsing queue) when parsing random messages from my own email archive. Time spent includes file calls to find and load random messages from disk.
```
Streaming 20000 random messages through a plain PassThrough
Done. 20000 messages [1244 MB] processed in 10.095 s. with average of 1981 messages/sec [123 MB/s]
Streaming 20000 random messages through Splitter and Joiner
Done. 20000 messages [1244 MB] processed in 55.882 s. with average of 358 messages/sec [22 MB/s]
```
## License
Dual licensed under **MIT** or **EUPLv1.1+**

test/imaps/node_modules/mailsplit/index.js generated vendored Normal file

@@ -0,0 +1,15 @@
'use strict';
const MessageSplitter = require('./lib/message-splitter');
const MessageJoiner = require('./lib/message-joiner');
const NodeRewriter = require('./lib/node-rewriter');
const NodeStreamer = require('./lib/node-streamer');
const Headers = require('./lib/headers');
module.exports = {
Splitter: MessageSplitter,
Joiner: MessageJoiner,
Rewriter: NodeRewriter,
Streamer: NodeStreamer,
Headers
};


@@ -0,0 +1,55 @@
'use strict';
// Helper class to rewrite nodes with specific mime type
const Transform = require('stream').Transform;
const libmime = require('libmime');
/**
* Really bad "stream" transform to parse format=flowed content
*
* @constructor
 * @param {Object} config Options object; `config.delSp` is true if the delsp parameter was used
*/
class FlowedDecoder extends Transform {
constructor(config) {
super();
this.config = config || {};
this.chunks = [];
this.chunklen = 0;
        this.libmime = new libmime.Libmime({ Iconv: this.config.Iconv });
}
_transform(chunk, encoding, callback) {
if (!chunk || !chunk.length) {
return callback();
}
        if (encoding !== 'buffer') {
chunk = Buffer.from(chunk, encoding);
}
this.chunks.push(chunk);
this.chunklen += chunk.length;
callback();
}
_flush(callback) {
if (this.chunklen) {
let currentBody = Buffer.concat(this.chunks, this.chunklen);
if (this.config.encoding === 'base64') {
currentBody = Buffer.from(currentBody.toString('binary'), 'base64');
}
let content = this.libmime.decodeFlowed(currentBody.toString('binary'), this.config.delSp);
this.push(Buffer.from(content, 'binary'));
}
return callback();
}
}
module.exports = FlowedDecoder;
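The actual RFC 3676 unfolding in the class above is delegated to `libmime.decodeFlowed`. To make the behaviour concrete, here is a rough stand-alone sketch of the core rule (an illustration only, not libmime's implementation): a line ending in a space marks a soft break and is joined with the next line, while the signature separator and space-stuffing get special handling.

```javascript
// Hypothetical sketch of RFC 3676 format=flowed unfolding (not libmime code).
function decodeFlowedSketch(str, delSp) {
    const out = [];
    let buffer = '';
    for (let line of str.split(/\r?\n/)) {
        if (line.startsWith(' ')) {
            line = line.slice(1); // remove space-stuffing
        }
        if (/ $/.test(line) && line !== '-- ') {
            // soft line break: join with the next line;
            // with delsp=yes the trailing space itself is removed
            buffer += delSp ? line.slice(0, -1) : line;
        } else {
            out.push(buffer + line); // hard line break
            buffer = '';
        }
    }
    if (buffer) {
        out.push(buffer);
    }
    return out.join('\n');
}
```

For example, `'Hello \nworld'` unfolds to `'Hello world'`, while the `'-- '` signature line keeps its hard break.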

test/imaps/node_modules/mailsplit/lib/headers.js generated vendored Normal file

@@ -0,0 +1,234 @@
'use strict';
const libmime = require('libmime');
/**
 * Class Headers to parse and handle message headers. A Headers instance
 * allows checking existing headers and deleting or adding new ones
*/
class Headers {
constructor(headers, config) {
config = config || {};
if (Array.isArray(headers)) {
// already using parsed headers
this.changed = true;
this.headers = false;
this.parsed = true;
this.lines = headers;
} else {
// using original string/buffer headers
this.changed = false;
this.headers = headers;
this.parsed = false;
this.lines = false;
}
this.mbox = false;
this.http = false;
this.libmime = new libmime.Libmime({ Iconv: config.Iconv });
}
hasHeader(key) {
if (!this.parsed) {
this._parseHeaders();
}
key = this._normalizeHeader(key);
return typeof this.lines.find(line => line.key === key) === 'object';
}
get(key) {
if (!this.parsed) {
this._parseHeaders();
}
key = this._normalizeHeader(key);
let lines = this.lines.filter(line => line.key === key).map(line => line.line);
return lines;
}
getDecoded(key) {
return this.get(key)
.map(line => this.libmime.decodeHeader(line))
.filter(line => line && line.value);
}
getFirst(key) {
if (!this.parsed) {
this._parseHeaders();
}
key = this._normalizeHeader(key);
let header = this.lines.find(line => line.key === key);
if (!header) {
return '';
}
return ((this.libmime.decodeHeader(header.line) || {}).value || '').toString().trim();
}
getList() {
if (!this.parsed) {
this._parseHeaders();
}
return this.lines;
}
add(key, value, index) {
if (typeof value === 'undefined') {
return;
}
if (typeof value === 'number') {
value = value.toString();
}
if (typeof value === 'string') {
value = Buffer.from(value);
}
value = value.toString('binary');
this.addFormatted(key, this.libmime.foldLines(key + ': ' + value.replace(/\r?\n/g, ''), 76, false), index);
}
addFormatted(key, line, index) {
if (!this.parsed) {
this._parseHeaders();
}
index = index || 0;
this.changed = true;
if (!line) {
return;
}
if (typeof line !== 'string') {
line = line.toString('binary');
}
let header = {
key: this._normalizeHeader(key),
line
};
if (index < 1) {
this.lines.unshift(header);
} else if (index >= this.lines.length) {
this.lines.push(header);
} else {
this.lines.splice(index, 0, header);
}
}
remove(key) {
if (!this.parsed) {
this._parseHeaders();
}
key = this._normalizeHeader(key);
for (let i = this.lines.length - 1; i >= 0; i--) {
if (this.lines[i].key === key) {
this.changed = true;
this.lines.splice(i, 1);
}
}
}
update(key, value, relativeIndex) {
if (!this.parsed) {
this._parseHeaders();
}
let keyName = key;
let index = 0;
key = this._normalizeHeader(key);
let relativeIndexCount = 0;
let relativeMatchFound = false;
for (let i = this.lines.length - 1; i >= 0; i--) {
if (this.lines[i].key === key) {
if (relativeIndex && relativeIndex !== relativeIndexCount) {
relativeIndexCount++;
continue;
}
index = i;
this.changed = true;
this.lines.splice(i, 1);
if (relativeIndex) {
relativeMatchFound = true;
break;
}
}
}
if (relativeIndex && !relativeMatchFound) return;
this.add(keyName, value, index);
}
build(lineEnd) {
if (!this.changed && !lineEnd) {
return typeof this.headers === 'string' ? Buffer.from(this.headers, 'binary') : this.headers;
}
if (!this.parsed) {
this._parseHeaders();
}
lineEnd = lineEnd || '\r\n';
let headers = this.lines.map(line => line.line.replace(/\r?\n/g, lineEnd)).join(lineEnd) + `${lineEnd}${lineEnd}`;
if (this.mbox) {
headers = this.mbox + lineEnd + headers;
}
if (this.http) {
headers = this.http + lineEnd + headers;
}
return Buffer.from(headers, 'binary');
}
_normalizeHeader(key) {
return (key || '').toLowerCase().trim();
}
_parseHeaders() {
if (!this.headers) {
this.lines = [];
this.parsed = true;
return;
}
let lines = this.headers
.toString('binary')
.replace(/[\r\n]+$/, '')
.split(/\r?\n/);
for (let i = lines.length - 1; i >= 0; i--) {
let chr = lines[i].charAt(0);
if (i && (chr === ' ' || chr === '\t')) {
lines[i - 1] += '\r\n' + lines[i];
lines.splice(i, 1);
} else {
let line = lines[i];
if (!i && /^From /i.test(line)) {
// mbox file
this.mbox = line;
lines.splice(i, 1);
continue;
} else if (!i && /^POST /i.test(line)) {
// HTTP POST request
this.http = line;
lines.splice(i, 1);
continue;
}
let key = this._normalizeHeader(line.substr(0, line.indexOf(':')));
lines[i] = {
key,
line
};
}
}
this.lines = lines;
this.parsed = true;
}
}
// expose to the world
module.exports = Headers;
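The backwards loop in `_parseHeaders` above unfolds folded header lines: continuation lines starting with whitespace are glued back onto the previous line, and walking backwards keeps the unvisited indexes stable while splicing. The same idea in isolation (a simplified sketch without the mbox/HTTP special cases):

```javascript
// Simplified sketch of the header unfolding done in _parseHeaders above.
function unfoldHeaders(raw) {
    const lines = raw.replace(/[\r\n]+$/, '').split(/\r?\n/);
    // iterate from the bottom so splice() never shifts an unvisited index
    for (let i = lines.length - 1; i > 0; i--) {
        const chr = lines[i].charAt(0);
        if (chr === ' ' || chr === '\t') {
            // continuation line: glue it back onto the previous line
            lines[i - 1] += '\r\n' + lines[i];
            lines.splice(i, 1);
        }
    }
    // key is the lowercased header name; line keeps the original folding
    return lines.map(line => ({
        key: line.substr(0, line.indexOf(':')).toLowerCase().trim(),
        line
    }));
}
```

So `'Subject: Hello\r\n world\r\nFrom: a@b'` produces two entries, the first keeping its folded `Subject` line intact.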


@@ -0,0 +1,30 @@
'use strict';
const Transform = require('stream').Transform;
class MessageJoiner extends Transform {
constructor() {
let options = {
readableObjectMode: false,
writableObjectMode: true
};
super(options);
}
_transform(obj, encoding, callback) {
if (Buffer.isBuffer(obj)) {
this.push(obj);
} else if (obj.type === 'node') {
this.push(obj.getHeaders());
} else if (obj.value) {
this.push(obj.value);
}
return callback();
}
_flush(callback) {
return callback();
}
}
module.exports = MessageJoiner;


@@ -0,0 +1,422 @@
'use strict';
const Transform = require('stream').Transform;
const MimeNode = require('./mime-node');
const MAX_HEAD_SIZE = 1 * 1024 * 1024;
const MAX_CHILD_NODES = 1000;
const HEAD = 0x01;
const BODY = 0x02;
class MessageSplitter extends Transform {
constructor(config) {
let options = {
readableObjectMode: true,
writableObjectMode: false
};
super(options);
this.config = config || {};
this.maxHeadSize = this.config.maxHeadSize || MAX_HEAD_SIZE;
this.maxChildNodes = this.config.maxChildNodes || MAX_CHILD_NODES;
this.tree = [];
this.nodeCounter = 0;
this.newNode();
this.tree.push(this.node);
this.line = false;
this.hasFailed = false;
}
_transform(chunk, encoding, callback) {
// process line by line
// find next line ending
let pos = 0;
let i = 0;
let group = {
type: 'none'
};
let groupstart = this.line ? -this.line.length : 0;
let groupend = 0;
let checkTrailingLinebreak = data => {
if (data.type === 'body' && data.node.parentNode && data.value && data.value.length) {
if (data.value[data.value.length - 1] === 0x0a) {
groupstart--;
groupend--;
pos--;
if (data.value.length > 1 && data.value[data.value.length - 2] === 0x0d) {
groupstart--;
groupend--;
pos--;
if (groupstart < 0 && !this.line) {
// store only <CR> as <LF> should be on the positive side
this.line = Buffer.allocUnsafe(1);
this.line[0] = 0x0d;
}
data.value = data.value.slice(0, data.value.length - 2);
} else {
data.value = data.value.slice(0, data.value.length - 1);
}
} else if (data.value[data.value.length - 1] === 0x0d) {
groupstart--;
groupend--;
pos--;
data.value = data.value.slice(0, data.value.length - 1);
}
}
};
let iterateData = () => {
for (let len = chunk.length; i < len; i++) {
// find next <LF>
if (chunk[i] === 0x0a) {
// line end
let start = Math.max(pos, 0);
pos = ++i;
return this.processLine(chunk.slice(start, i), false, (err, data, flush) => {
if (err) {
this.hasFailed = true;
return setImmediate(() => callback(err));
}
if (!data) {
return setImmediate(iterateData);
}
if (flush) {
if (group && group.type !== 'none') {
if (group.type === 'body' && groupend >= groupstart && group.node.parentNode) {
// do not include the last line ending for body
if (chunk[groupend - 1] === 0x0a) {
groupend--;
if (groupend >= groupstart && chunk[groupend - 1] === 0x0d) {
groupend--;
}
}
}
if (groupstart !== groupend) {
group.value = chunk.slice(groupstart, groupend);
if (groupend < i) {
data.value = chunk.slice(groupend, i);
}
}
this.push(group);
group = {
type: 'none'
};
groupstart = groupend = i;
}
this.push(data);
groupend = i;
return setImmediate(iterateData);
}
if (data.type === group.type) {
// shift slice end position forward
groupend = i;
} else {
if (group.type === 'body' && groupend >= groupstart && group.node.parentNode) {
// do not include the last line ending for body
if (chunk[groupend - 1] === 0x0a) {
groupend--;
if (groupend >= groupstart && chunk[groupend - 1] === 0x0d) {
groupend--;
}
}
}
if (group.type !== 'none' && group.type !== 'node') {
// we have a previous data/body chunk to output
if (groupstart !== groupend) {
group.value = chunk.slice(groupstart, groupend);
if (group.value && group.value.length) {
this.push(group);
group = {
type: 'none'
};
}
}
}
if (data.type === 'node') {
this.push(data);
groupstart = i;
groupend = i;
} else if (groupstart < 0) {
groupstart = i;
groupend = i;
checkTrailingLinebreak(data);
if (data.value && data.value.length) {
this.push(data);
}
} else {
// start new body/data chunk
group = data;
groupstart = groupend;
groupend = i;
}
}
return setImmediate(iterateData);
});
}
}
// skip last linebreak for body
if (pos >= groupstart + 1 && group.type === 'body' && group.node.parentNode) {
// do not include the last line ending for body
if (chunk[pos - 1] === 0x0a) {
pos--;
if (pos >= groupstart && chunk[pos - 1] === 0x0d) {
pos--;
}
}
}
if (group.type !== 'none' && group.type !== 'node' && pos > groupstart) {
// we have a leftover data/body chunk to push out
group.value = chunk.slice(groupstart, pos);
if (group.value && group.value.length) {
this.push(group);
group = {
type: 'none'
};
}
}
if (pos < chunk.length) {
if (this.line) {
this.line = Buffer.concat([this.line, chunk.slice(pos)]);
} else {
this.line = chunk.slice(pos);
}
}
callback();
};
setImmediate(iterateData);
}
_flush(callback) {
if (this.hasFailed) {
return callback();
}
this.processLine(false, true, (err, data) => {
if (err) {
return setImmediate(() => callback(err));
}
if (data && (data.type === 'node' || (data.value && data.value.length))) {
this.push(data);
}
callback();
});
}
compareBoundary(line, startpos, boundary) {
// --{boundary}\r\n or --{boundary}--\r\n
if (line.length < boundary.length + 3 + startpos || line.length > boundary.length + 6 + startpos) {
return false;
}
for (let i = 0; i < boundary.length; i++) {
if (line[i + 2 + startpos] !== boundary[i]) {
return false;
}
}
let pos = 0;
for (let i = boundary.length + 2 + startpos; i < line.length; i++) {
let c = line[i];
if (pos === 0 && (c === 0x0d || c === 0x0a)) {
// 1: next node
return 1;
}
if (pos === 0 && c !== 0x2d) {
// expecting "-"
return false;
}
if (pos === 1 && c !== 0x2d) {
// expecting "-"
return false;
}
if (pos === 2 && c !== 0x0d && c !== 0x0a) {
// expecting line terminator, either <CR> or <LF>
return false;
}
if (pos === 3 && c !== 0x0a) {
// expecting line terminator <LF>
return false;
}
pos++;
}
// 2: multipart end
return 2;
}
checkBoundary(line) {
let startpos = 0;
if (line.length >= 1 && (line[0] === 0x0d || line[0] === 0x0a)) {
startpos++;
if (line.length >= 2 && (line[0] === 0x0d || line[1] === 0x0a)) {
startpos++;
}
}
if (line.length < 4 || line[startpos] !== 0x2d || line[startpos + 1] !== 0x2d) {
// definitely not a boundary
return false;
}
let boundary;
if (this.node._boundary && (boundary = this.compareBoundary(line, startpos, this.node._boundary))) {
// 1: next child
// 2: multipart end
return boundary;
}
if (this.node._parentBoundary && (boundary = this.compareBoundary(line, startpos, this.node._parentBoundary))) {
// 3: next sibling
// 4: parent end
return boundary + 2;
}
return false;
}
processLine(line, final, next) {
let flush = false;
if (this.line && line) {
line = Buffer.concat([this.line, line]);
this.line = false;
} else if (this.line && !line) {
line = this.line;
this.line = false;
}
if (!line) {
line = Buffer.alloc(0);
}
if (this.nodeCounter > this.maxChildNodes) {
let err = new Error('Max allowed child nodes exceeded');
err.code = 'EMAXLEN';
return next(err);
}
// we check boundary outside the HEAD/BODY scope as it may appear anywhere
let boundary = this.checkBoundary(line);
if (boundary) {
// reached boundary, switch context
switch (boundary) {
case 1:
// next child
this.newNode(this.node);
flush = true;
break;
case 2:
// reached end of children, keep current node
break;
case 3: {
// next sibling
let parentNode = this.node.parentNode;
if (parentNode && parentNode.contentType === 'message/rfc822') {
// special case where immediate parent is an inline message block
// move up another step
parentNode = parentNode.parentNode;
}
this.newNode(parentNode);
flush = true;
break;
}
case 4:
// special case where the boundary closes a node that only has a header
if (this.node && this.node._headerlen && !this.node.headers) {
this.node.parseHeaders();
this.push(this.node);
}
// move up
if (this.tree.length) {
this.node = this.tree.pop();
}
this.state = BODY;
break;
}
return next(
null,
{
node: this.node,
type: 'data',
value: line
},
flush
);
}
switch (this.state) {
case HEAD: {
this.node.addHeaderChunk(line);
if (this.node._headerlen > this.maxHeadSize) {
let err = new Error('Max header size for a MIME node exceeded');
err.code = 'EMAXLEN';
return next(err);
}
if (final || (line.length === 1 && line[0] === 0x0a) || (line.length === 2 && line[0] === 0x0d && line[1] === 0x0a)) {
let currentNode = this.node;
currentNode.parseHeaders();
// if the content is attached message then just continue
if (
currentNode.contentType === 'message/rfc822' &&
!this.config.ignoreEmbedded &&
(!currentNode.encoding || ['7bit', '8bit', 'binary'].includes(currentNode.encoding)) &&
(this.config.defaultInlineEmbedded ? currentNode.disposition !== 'attachment' : currentNode.disposition === 'inline')
) {
currentNode.messageNode = true;
this.newNode(currentNode);
if (currentNode.parentNode) {
this.node._parentBoundary = currentNode.parentNode._boundary;
}
} else {
if (currentNode.contentType === 'message/rfc822') {
currentNode.messageNode = false;
}
this.state = BODY;
if (currentNode.multipart && currentNode._boundary) {
this.tree.push(currentNode);
}
}
return next(null, currentNode, flush);
}
return next();
}
case BODY: {
return next(
null,
{
node: this.node,
type: this.node.multipart ? 'data' : 'body',
value: line
},
flush
);
}
}
next(null, false);
}
newNode(parent) {
this.node = new MimeNode(parent || false, this.config);
this.state = HEAD;
this.nodeCounter++;
}
}
module.exports = MessageSplitter;

test/imaps/node_modules/mailsplit/lib/mime-node.js generated vendored Normal file

@@ -0,0 +1,265 @@
'use strict';
const Headers = require('./headers');
const libmime = require('libmime');
const libqp = require('libqp');
const libbase64 = require('libbase64');
const PassThrough = require('stream').PassThrough;
const pathlib = require('path');
class MimeNode {
constructor(parentNode, config) {
this.type = 'node';
this.root = !parentNode;
this.parentNode = parentNode;
this._parentBoundary = this.parentNode && this.parentNode._boundary;
this._headersLines = [];
this._headerlen = 0;
this._parsedContentType = false;
this._boundary = false;
this.multipart = false;
this.encoding = false;
this.headers = false;
this.contentType = false;
this.flowed = false;
this.delSp = false;
this.config = config || {};
this.libmime = new libmime.Libmime({ Iconv: this.config.Iconv });
this.parentPartNumber = (parentNode && this.partNr) || [];
this.partNr = false; // resolved later
this.childPartNumbers = 0;
}
getPartNr(provided) {
if (provided) {
return []
.concat(this.partNr || [])
.filter(nr => !isNaN(nr))
.concat(provided);
}
let childPartNr = ++this.childPartNumbers;
return []
.concat(this.partNr || [])
.filter(nr => !isNaN(nr))
.concat(childPartNr);
}
addHeaderChunk(line) {
if (!line) {
return;
}
this._headersLines.push(line);
this._headerlen += line.length;
}
parseHeaders() {
if (this.headers) {
return;
}
this.headers = new Headers(Buffer.concat(this._headersLines, this._headerlen), this.config);
this._parsedContentDisposition = this.libmime.parseHeaderValue(this.headers.getFirst('Content-Disposition'));
// if content-type is missing default to plaintext
let contentHeader;
if (this.headers.get('Content-Type').length) {
contentHeader = this.headers.getFirst('Content-Type');
} else {
if (this._parsedContentDisposition.params.filename) {
let extension = pathlib.parse(this._parsedContentDisposition.params.filename).ext.replace(/^\./, '');
if (extension) {
contentHeader = libmime.detectMimeType(extension);
}
}
if (!contentHeader) {
if (/^attachment$/i.test(this._parsedContentDisposition.value)) {
contentHeader = 'application/octet-stream';
} else {
contentHeader = 'text/plain';
}
}
}
this._parsedContentType = this.libmime.parseHeaderValue(contentHeader);
this.encoding = this.headers
.getFirst('Content-Transfer-Encoding')
.replace(/\(.*\)/g, '')
.toLowerCase()
.trim();
this.contentType = (this._parsedContentType.value || '').toLowerCase().trim() || false;
this.charset = this._parsedContentType.params.charset || false;
this.disposition = (this._parsedContentDisposition.value || '').toLowerCase().trim() || false;
// fix invalidly encoded disposition values
if (this.disposition) {
try {
this.disposition = this.libmime.decodeWords(this.disposition);
} catch (E) {
// failed to parse disposition, keep as is (most probably an unknown charset is used)
}
}
this.filename = this._parsedContentDisposition.params.filename || this._parsedContentType.params.name || false;
if (this._parsedContentType.params.format && this._parsedContentType.params.format.toLowerCase().trim() === 'flowed') {
this.flowed = true;
if (this._parsedContentType.params.delsp && this._parsedContentType.params.delsp.toLowerCase().trim() === 'yes') {
this.delSp = true;
}
}
if (this.filename) {
try {
this.filename = this.libmime.decodeWords(this.filename);
} catch (E) {
// failed to parse filename, keep as is (most probably an unknown charset is used)
}
}
this.multipart =
(this.contentType &&
this.contentType.substr(0, this.contentType.indexOf('/')) === 'multipart' &&
this.contentType.substr(this.contentType.indexOf('/') + 1)) ||
false;
this._boundary = (this._parsedContentType.params.boundary && Buffer.from(this._parsedContentType.params.boundary)) || false;
this.rfc822 = this.contentType === 'message/rfc822';
if (!this.parentNode || this.parentNode.rfc822) {
this.partNr = this.parentNode ? this.parentNode.getPartNr('TEXT') : ['TEXT'];
} else {
this.partNr = this.parentNode ? this.parentNode.getPartNr() : [];
}
}
getHeaders() {
if (!this.headers) {
this.parseHeaders();
}
return this.headers.build();
}
setContentType(contentType) {
if (!this.headers) {
this.parseHeaders();
}
contentType = (contentType || '').toLowerCase().trim();
if (contentType) {
this._parsedContentType.value = contentType;
}
if (!this.flowed && this._parsedContentType.params.format) {
delete this._parsedContentType.params.format;
}
if (!this.delSp && this._parsedContentType.params.delsp) {
delete this._parsedContentType.params.delsp;
}
this.headers.update('Content-Type', this.libmime.buildHeaderValue(this._parsedContentType));
}
setCharset(charset) {
if (!this.headers) {
this.parseHeaders();
}
charset = (charset || '').toLowerCase().trim();
if (charset === 'ascii') {
charset = '';
}
if (!charset) {
if (!this._parsedContentType.value) {
// nothing to set or update
return;
}
delete this._parsedContentType.params.charset;
} else {
this._parsedContentType.params.charset = charset;
}
if (!this._parsedContentType.value) {
this._parsedContentType.value = 'text/plain';
}
this.headers.update('Content-Type', this.libmime.buildHeaderValue(this._parsedContentType));
}
setFilename(filename) {
if (!this.headers) {
this.parseHeaders();
}
this.filename = (filename || '').toLowerCase().trim();
if (this._parsedContentType.params.name) {
delete this._parsedContentType.params.name;
this.headers.update('Content-Type', this.libmime.buildHeaderValue(this._parsedContentType));
}
if (!this.filename) {
if (!this._parsedContentDisposition.value) {
// nothing to set or update
return;
}
delete this._parsedContentDisposition.params.filename;
} else {
this._parsedContentDisposition.params.filename = this.filename;
}
if (!this._parsedContentDisposition.value) {
this._parsedContentDisposition.value = 'attachment';
}
this.headers.update('Content-Disposition', this.libmime.buildHeaderValue(this._parsedContentDisposition));
}
getDecoder() {
if (!this.headers) {
this.parseHeaders();
}
switch (this.encoding) {
case 'base64':
return new libbase64.Decoder();
case 'quoted-printable':
return new libqp.Decoder();
default:
return new PassThrough();
}
}
getEncoder(encoding) {
if (!this.headers) {
this.parseHeaders();
}
encoding = (encoding || '').toString().toLowerCase().trim();
if (encoding && encoding !== this.encoding) {
this.headers.update('Content-Transfer-Encoding', encoding);
} else {
encoding = this.encoding;
}
switch (encoding) {
case 'base64':
return new libbase64.Encoder();
case 'quoted-printable':
return new libqp.Encoder();
default:
return new PassThrough();
}
}
}
module.exports = MimeNode;

test/imaps/node_modules/mailsplit/lib/node-rewriter.js generated vendored Normal file

@@ -0,0 +1,194 @@
'use strict';
// Helper class to rewrite nodes with specific mime type
const Transform = require('stream').Transform;
const FlowedDecoder = require('./flowed-decoder');
/**
* NodeRewriter Transform stream. Updates content for all nodes with specified mime type
*
* @constructor
* @param {String} mimeType Define the Mime-Type to look for
* @param {Function} rewriteAction Function to run with the node content
*/
class NodeRewriter extends Transform {
constructor(filterFunc, rewriteAction) {
let options = {
readableObjectMode: true,
writableObjectMode: true
};
super(options);
this.filterFunc = filterFunc;
this.rewriteAction = rewriteAction;
this.decoder = false;
this.encoder = false;
this.continue = false;
}
_transform(data, encoding, callback) {
this.processIncoming(data, callback);
}
_flush(callback) {
if (this.decoder) {
// emit an empty node just in case there is pending data to end
return this.processIncoming(
{
type: 'none'
},
callback
);
}
return callback();
}
processIncoming(data, callback) {
if (this.decoder && data.type === 'body') {
// data to parse
if (!this.decoder.write(data.value)) {
return this.decoder.once('drain', callback);
} else {
return callback();
}
} else if (this.decoder && data.type !== 'body') {
// stop decoding.
// we can not process the current data chunk as we need to wait until
// the parsed data is completely processed, so we store a reference to the
// continue callback
this.continue = () => {
this.continue = false;
this.decoder = false;
this.encoder = false;
this.processIncoming(data, callback);
};
return this.decoder.end();
} else if (data.type === 'node' && this.filterFunc(data)) {
// found matching node, create new handler
this.emit('node', this.createDecodePair(data));
} else if (this.readable && data.type !== 'none') {
// we don't care about this data, just pass it over to the joiner
this.push(data);
}
callback();
}
createDecodePair(node) {
this.decoder = node.getDecoder();
if (['base64', 'quoted-printable'].includes(node.encoding)) {
this.encoder = node.getEncoder();
} else {
this.encoder = node.getEncoder('quoted-printable');
}
let lastByte = false;
let decoder = this.decoder;
let encoder = this.encoder;
let firstChunk = true;
decoder.$reading = false;
let readFromEncoder = () => {
decoder.$reading = true;
let data = encoder.read();
if (data === null) {
decoder.$reading = false;
return;
}
if (firstChunk) {
firstChunk = false;
if (this.readable) {
this.push(node);
if (node.type === 'body') {
lastByte = node.value && node.value.length && node.value[node.value.length - 1];
}
}
}
let writeMore = true;
if (this.readable) {
writeMore = this.push({
node,
type: 'body',
value: data
});
lastByte = data && data.length && data[data.length - 1];
}
if (writeMore) {
return setImmediate(readFromEncoder);
} else {
encoder.pause();
// no idea how to catch drain? use timeout for now as poor man's substitute
// this.once('drain', () => encoder.resume());
setTimeout(() => {
encoder.resume();
setImmediate(readFromEncoder);
}, 100);
}
};
encoder.on('readable', () => {
if (!decoder.$reading) {
return readFromEncoder();
}
});
encoder.on('end', () => {
if (firstChunk) {
firstChunk = false;
if (this.readable) {
this.push(node);
if (node.type === 'body') {
lastByte = node.value && node.value.length && node.value[node.value.length - 1];
}
}
}
if (lastByte !== 0x0a) {
// make sure there is a terminating line break
this.push({
node,
type: 'body',
value: Buffer.from([0x0a])
});
}
if (this.continue) {
return this.continue();
}
});
if (/^text\//.test(node.contentType) && node.flowed) {
// text/plain; format=flowed is a special case
let flowDecoder = decoder;
decoder = new FlowedDecoder({
delSp: node.delSp,
encoding: node.encoding
});
flowDecoder.on('error', err => {
decoder.emit('error', err);
});
flowDecoder.pipe(decoder);
// we don't know what kind of data we are going to get, does it comply with the
// requirements of format=flowed, so we just cancel it
node.flowed = false;
node.delSp = false;
node.setContentType();
}
return {
node,
decoder,
encoder
};
}
}
module.exports = NodeRewriter;

test/imaps/node_modules/mailsplit/lib/node-streamer.js generated vendored Normal file

@@ -0,0 +1,121 @@
'use strict';
// Helper class to rewrite nodes with specific mime type
const Transform = require('stream').Transform;
const FlowedDecoder = require('./flowed-decoder');
/**
* NodeRewriter Transform stream. Updates content for all nodes with specified mime type
*
* @constructor
* @param {String} mimeType Define the Mime-Type to look for
* @param {Function} streamAction Function to run with the node content
*/
class NodeStreamer extends Transform {
constructor(filterFunc, streamAction) {
let options = {
readableObjectMode: true,
writableObjectMode: true
};
super(options);
this.filterFunc = filterFunc;
this.streamAction = streamAction;
this.decoder = false;
this.canContinue = false;
this.continue = false;
}
_transform(data, encoding, callback) {
this.processIncoming(data, callback);
}
_flush(callback) {
if (this.decoder) {
// emit an empty node just in case there is pending data to end
return this.processIncoming(
{
type: 'none'
},
callback
);
}
return callback();
}
processIncoming(data, callback) {
if (this.decoder && data.type === 'body') {
// data to parse
this.push(data);
if (!this.decoder.write(data.value)) {
return this.decoder.once('drain', callback);
} else {
return callback();
}
} else if (this.decoder && data.type !== 'body') {
// stop decoding.
// we can not process the current data chunk as we need to wait until
// the parsed data is completely processed, so we store a reference to the
// continue callback
let doContinue = () => {
this.continue = false;
this.decoder = false;
this.canContinue = false;
this.processIncoming(data, callback);
};
if (this.canContinue) {
setImmediate(doContinue);
} else {
this.continue = () => doContinue();
}
return this.decoder.end();
} else if (data.type === 'node' && this.filterFunc(data)) {
this.push(data);
// found matching node, create new handler
this.emit('node', this.createDecoder(data));
} else if (this.readable && data.type !== 'none') {
// we don't care about this data, just pass it over to the joiner
this.push(data);
}
callback();
}
createDecoder(node) {
this.decoder = node.getDecoder();
let decoder = this.decoder;
decoder.$reading = false;
if (/^text\//.test(node.contentType) && node.flowed) {
let flowDecoder = decoder;
decoder = new FlowedDecoder({
delSp: node.delSp
});
flowDecoder.on('error', err => {
decoder.emit('error', err);
});
flowDecoder.pipe(decoder);
}
return {
node,
decoder,
done: () => {
if (typeof this.continue === 'function') {
// called once input stream is processed
this.continue();
} else {
// called before input stream is processed
this.canContinue = true;
}
}
};
}
}
module.exports = NodeStreamer;


@@ -0,0 +1,7 @@
{
"rules": {
"indent": 0,
"no-prototype-builtins": 0
},
"extends": "nodemailer"
}


@@ -0,0 +1 @@
*.js text eol=lf


@@ -0,0 +1,4 @@
# These are supported funding model platforms
github: [andris9] # enable once enrolled
custom: ['https://www.paypal.me/nodemailer']


@@ -0,0 +1,21 @@
name: Run tests
on:
push:
pull_request:
jobs:
test:
strategy:
matrix:
node: [12.x, 14.x, 16.x, 18.x]
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node }}
- run: npm install
- run: npm test


@@ -0,0 +1,8 @@
module.exports = {
printWidth: 160,
tabWidth: 4,
singleQuote: true,
endOfLine: 'lf',
trailingComma: 'none',
arrowParens: 'avoid'
};


@@ -0,0 +1,110 @@
# Changelog
## v5.2.0 2022-12-08
- Bumped libqp to get rid of `new Buffer` warnings
## v5.1.0 2022-04-28
- Bumped deps
- Removed Travis config
- Added Github actions file to run tests
## v5.0.0 2020-07-22
- Removed optional node-iconv support
- Bumped dependencies
- Updated Travis test matrix, dropped Node 8
## v4.2.1 2019-10-28
- Replace jconv with more recent encoding-japanese
## v4.2.0 2019-10-28
- Use jconv module to parse ISO-2022-JP by default
## v4.1.4 2019-10-28
- decodeWords should also decode empty content part [WeiAnAn](9bbcfd2)
- fix decode base64 ending with = [WeiAnAn](6e656e2)
## v4.1.0 2019-05-01
- Experimental support for node-iconv
## v4.0.1 2018-07-24
- Maintenance release. Bumped deps
## v4.0.0 2018-06-11
- Refactored decoding of mime encoded words and parameter continuation strings
## v3.0.0 2016-12-08
- Updated encoded-word generation. Previously a minimal value was encoded, so it was possible to have multiple encoded words in a string separated by non-encoded words. This was an issue with some webmail clients that stripped out the non-encoded parts between encoded words, so the updated method uses a wide match, encoding everything from the first word with unicode characters to the last one: "a =?b?= c =?d?= e" -> "a =?bcd?= e"
## v2.1.3 2016-12-08
- Revert dot as a special symbol
## v2.1.2 2016-11-21
- Quote special symbols as defined in RFC (surajwy)
## v2.1.1 2016-11-15
- Fixed issue with special symbols in attachment filenames
## v2.1.0 2016-07-24
- Changed handling of base64 encoded mime words where multiple words are joined together if possible. This fixes issues with multi byte characters getting split into different mime words (against the RFC but occurs)
## v2.0.3 2016-02-29
- Fixed an issue with rfc2231 filenames
## v2.0.2 2016-02-11
- Fixed an issue with base64 mime words encoding
## v2.0.1 2016-02-11
- Fix base64 mime-word encoding. Final string length was calculated invalidly
## v2.0.0 2016-01-04
- Replaced jshint with eslint
- Refactored file structure
## v1.2.1 2015-10-05
Added support for emojis in header params (eg. filenames)
## v1.2.0 2015-10-05
Added support for emojis in header params (eg. filenames)
## v1.1.0 2015-09-24
Updated encoded word encoding with quoted printable, should be more like required in https://tools.ietf.org/html/rfc2047#section-5
## v1.0.0 2015-04-15
Changed versioning scheme to use 1.x instead of 0.x versions. Bumped dependency versions, no actual code changes.
## v0.1.7 2015-01-19
Updated unicode filename handling: only revert to parameter continuation if the value actually includes
non-ascii characters or is too long. Previously filenames were encoded if they included anything
besides letters, numbers, dot or space.
## v0.1.6 2014-10-25
Fixed an issue with `encodeWords` where a trailing space was invalidly included in a word if the word
ended with a non-ascii character.
## v0.1.5 2014-09-12
Do not use quotes for continuation encoded filename parts. Fixes an issue with Gmail where the Gmail webmail keeps the charset as part of the filename.


@@ -0,0 +1,19 @@
Copyright (c) 2014-2016 Andris Reinman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -0,0 +1,207 @@
# libmime
`libmime` provides useful MIME related functions. For Quoted-Printable and Base64 encoding and decoding see [libqp](https://github.com/andris9/libqp) and [libbase64](https://github.com/andris9/libbase64).
## Installation
### [npm](https://www.npmjs.org/):
npm install libmime
## Usage
var libmime = require('libmime');
## Methods
### Encoded Words
#### #encodeWord
Encodes a string into mime [encoded word](http://en.wikipedia.org/wiki/MIME#Encoded-Word) format.
libmime.encodeWord(str [, mimeWordEncoding[, maxLength]]) → String
* **str** - String or Buffer to be encoded
* **mimeWordEncoding** - Encoding for the mime word, either Q or B (default is 'Q')
* **maxLength** - If set, split mime words into several chunks if needed
**Example**
libmime.encodeWord('See on õhin test', 'Q');
With UTF-8 and Quoted-printable encoding this becomes
=?UTF-8?Q?See_on_=C3=B5hin_test?=
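The Q encoding shown above can be sketched in a few lines. This is only an illustrative reimplementation, not the libmime internals: UTF-8 bytes outside the printable ASCII range become `=XX` hex escapes and spaces become underscores.

```javascript
// Toy Q-encoder for a single word (illustrative only, not the actual
// libmime implementation). Reserved characters "=", "?" and "_" are
// escaped as well, as they carry special meaning inside encoded words.
function qEncodeWord(str, charset = 'UTF-8') {
    let encoded = '';
    for (const b of Buffer.from(str, 'utf-8')) {
        if (b === 0x20) {
            encoded += '_'; // space is encoded as underscore
        } else if (b >= 0x21 && b <= 0x7e && b !== 0x3d && b !== 0x3f && b !== 0x5f) {
            encoded += String.fromCharCode(b); // printable ASCII passes through
        } else {
            encoded += '=' + b.toString(16).toUpperCase().padStart(2, '0');
        }
    }
    return `=?${charset}?Q?${encoded}?=`;
}

console.log(qEncodeWord('See on õhin test'));
// =?UTF-8?Q?See_on_=C3=B5hin_test?=
```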
#### #encodeWords
Encodes non ascii sequences in a string to mime words.
libmime.encodeWords(str[, mimeWordEncoding[, maxLength]]) → String
* **str** - String or Buffer to be encoded
* **mimeWordEncoding** - Encoding for the mime word, either Q or B (default is 'Q')
* **maxLength** - If set, split mime words into several chunks if needed
#### #decodeWords
Decodes a string that might include one or several mime words. If no mime words are found in the string, the original string is returned.
libmime.decodeWords(str) → String
* **str** - String to be decoded
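For a feel of what decoding involves, here is a toy decoder for a single Q-encoded word. The real `decodeWords` also handles B encoding, adjacent encoded words and charset conversion, so treat this strictly as a sketch.

```javascript
// Illustrative single-word Q-decoder: "_" maps back to a space and
// "=XX" hex escapes map back to bytes. Assumes a UTF-8 compatible
// charset, which the real implementation does not.
function qDecodeWord(word) {
    const match = /^=\?([^?]+)\?Q\?([^?]*)\?=$/i.exec(word);
    if (!match) {
        return word; // not an encoded word, return as is
    }
    const payload = match[2];
    const bytes = [];
    for (let i = 0; i < payload.length; i++) {
        if (payload[i] === '_') {
            bytes.push(0x20);
        } else if (payload[i] === '=' && i + 2 < payload.length) {
            bytes.push(parseInt(payload.substr(i + 1, 2), 16));
            i += 2;
        } else {
            bytes.push(payload.charCodeAt(i));
        }
    }
    return Buffer.from(bytes).toString('utf-8');
}

console.log(qDecodeWord('=?UTF-8?Q?See_on_=C3=B5hin_test?='));
// See on õhin test
```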
### Folding
#### #foldLines
Folds a long line according to the [RFC 5322](http://tools.ietf.org/html/rfc5322#section-2.1.1). Mostly needed for folding header lines.
libmime.foldLines(str [, lineLength[, afterSpace]]) → String
* **str** - String to be folded
* **lineLength** - Maximum length of a line (defaults to 76)
* **afterSpace** - If true, leave a space in the end of a line
**Example**
libmime.foldLines('Content-Type: multipart/alternative; boundary="----zzzz----"')
results in
Content-Type: multipart/alternative;
boundary="----zzzz----"
#### #encodeFlowed
Adds soft line breaks to content marked with `format=flowed` options to ensure that no line in the message is ever longer than lineLength.
libmime.encodeFlowed(str [, lineLength]) → String
* **str** Plaintext string that requires wrapping
* **lineLength** (defaults to 76) Maximum length of a line
#### #decodeFlowed
Unwraps a plaintext string in format=flowed wrapping.
libmime.decodeFlowed(str [, delSp]) → String
* **str** Plaintext string with format=flowed to decode
* **delSp** If true, delete leading spaces (delsp=yes)
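As a rough sketch of what this unwrapping does (per RFC 3676, a line ending with a space flows into the next one; this toy version ignores the space stuffing and signature separator rules that the real implementation has to handle):

```javascript
// Minimal format=flowed unwrap sketch: while the previous collected
// line still ends with a space, the next physical line is appended to
// it. With delsp=yes the flow-marking trailing space is removed first.
function decodeFlowedSketch(str, delSp = false) {
    return str
        .split(/\r?\n/)
        .reduce((result, line) => {
            if (result.length && / $/.test(result[result.length - 1])) {
                if (delSp) {
                    result[result.length - 1] = result[result.length - 1].replace(/ $/, '');
                }
                result[result.length - 1] += line;
            } else {
                result.push(line);
            }
            return result;
        }, [])
        .join('\n');
}

console.log(decodeFlowedSketch('Text that is \nwrapped onto \ntwo lines'));
// Text that is wrapped onto two lines
```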
### Headers
#### #decodeHeader
Unfolds a header line and splits it into a key and value pair. The return value is in the form of `{key: 'subject', value: 'test'}`. The value is not mime word decoded; you need to do your own decoding based on the rules for the specific header key.
libmime.decodeHeader(headerLine) → Object
* **headerLine** - Single header line, might include linebreaks as well if folded
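A minimal sketch of that unfold-and-split step (illustrative only; the real implementation handles more edge cases):

```javascript
// Unfold folded continuation lines (a linebreak followed by
// whitespace collapses to a single space), then split on the first
// colon into a lowercased key and a trimmed value.
function decodeHeaderSketch(headerLine) {
    const unfolded = headerLine.replace(/\r?\n[ \t]+/g, ' ').trim();
    const sep = unfolded.indexOf(':');
    return {
        key: unfolded.substr(0, sep).trim().toLowerCase(),
        value: unfolded.substr(sep + 1).trim()
    };
}

console.log(decodeHeaderSketch('Subject: test'));
// { key: 'subject', value: 'test' }
```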
#### #decodeHeaders
Parses a block of header lines. Does not decode mime words as every header might have its own rules (eg. formatted email addresses and such).
Return value is an object of headers, where header keys are object keys and values are arrays.
libmime.decodeHeaders(headers) → Object
* **headers** - Headers string
#### #parseHeaderValue
Parses a header value with `key=value` arguments into a structured object. Useful when dealing with
`content-type` and such. Continuation encoded params are joined into mime encoded words.
parseHeaderValue(valueString) → Object
* **valueString** - a header value without the key
**Example**
```javascript
parseHeaderValue('text/plain; CHARSET="UTF-8"');
```
Outputs
```json
{
"value": "text/plain",
"params": {
"charset": "UTF-8"
}
}
```
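A simplified sketch of that parsing step (the real `parseHeaderValue` also copes with quoting edge cases and RFC 2231 continuations, so treat this as illustration only):

```javascript
// Split on ";" into the main value and key=value parameter pairs;
// parameter keys are lowercased and surrounding quotes are stripped
// from parameter values.
function parseHeaderValueSketch(str) {
    const [value, ...parts] = str.split(';');
    const params = {};
    for (const part of parts) {
        const eq = part.indexOf('=');
        if (eq < 0) {
            continue;
        }
        const key = part.substr(0, eq).trim().toLowerCase();
        params[key] = part
            .substr(eq + 1)
            .trim()
            .replace(/^"(.*)"$/, '$1');
    }
    return { value: value.trim(), params };
}

console.log(parseHeaderValueSketch('text/plain; CHARSET="UTF-8"'));
// { value: 'text/plain', params: { charset: 'UTF-8' } }
```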
#### #buildHeaderValue
Joins a structured header value back together as 'value; param1=value1; param2=value2'
buildHeaderValue(structuredHeader) → String
* **structuredHeader** - a header value formatted with `parseHeaderValue`
`filename` argument is encoded with continuation encoding if needed
#### #buildHeaderParam
Encodes and splits a header param value according to [RFC2231](https://tools.ietf.org/html/rfc2231#section-3) Parameter Value Continuations.
libmime.buildHeaderParam(key, str, maxLength) → Array
* **key** - Parameter key (eg. `filename`)
* **str** - String or a Buffer value to encode
* **maxLength** - Maximum length of the encoded string part (not line length). Defaults to 50
The method returns an array of encoded parts with the following structure: `[{key:'...', value: '...'}]`
**Example**
```
libmime.buildHeaderParam('filename', 'filename õäöü.txt', 20);
[ { key: 'filename*0*', value: 'utf-8\'\'filename%20' },
{ key: 'filename*1*', value: '%C3%B5%C3%A4%C3%B6' },
{ key: 'filename*2*', value: '%C3%BC.txt' } ]
```
This can be combined into a properly formatted header:
```
Content-disposition: attachment; filename*0*=utf-8''filename%20
filename*1*=%C3%B5%C3%A4%C3%B6; filename*2*=%C3%BC.txt
```
### MIME Types
#### #detectExtension
Returns file extension for a content type string. If no suitable extensions are found, 'bin' is used as the default extension.
libmime.detectExtension(mimeType) → String
* **mimeType** - Content type to be checked for
**Example**
libmime.detectExtension('image/jpeg') // returns 'jpeg'
#### #detectMimeType
Returns content type for a file extension. If no suitable content types are found, 'application/octet-stream' is used as the default content type.
libmime.detectMimeType(extension) → String
* **extension** Extension (or filename) to be checked for
**Example**
    libmime.detectMimeType('logo.jpg') // returns 'image/jpeg'
## License
**MIT**

@@ -0,0 +1,117 @@
'use strict';
const iconv = require('iconv-lite');
const encodingJapanese = require('encoding-japanese');
const charsets = require('./charsets');
/**
* Character set encoding and decoding functions
*/
const charset = (module.exports = {
/**
* Encodes a Unicode string into a Buffer object as UTF-8
*
* We force UTF-8 here, no strange encodings allowed.
*
* @param {String} str String to be encoded
* @return {Buffer} UTF-8 encoded typed array
*/
encode(str) {
return Buffer.from(str, 'utf-8');
},
/**
* Decodes a string from Buffer to a Unicode string using the specified encoding
* NB! Throws if unknown charset is used
*
* @param {Buffer} buf Binary data to be decoded
* @param {String} [fromCharset='UTF-8'] Binary data is decoded into string using this charset
* @return {String} Decoded string
*/
decode(buf, fromCharset) {
fromCharset = charset.normalizeCharset(fromCharset || 'UTF-8');
if (/^(us-)?ascii|utf-8|7bit$/i.test(fromCharset)) {
return buf.toString('utf-8');
}
try {
if (/^jis|^iso-?2022-?jp|^EUCJP/i.test(fromCharset)) {
if (typeof buf === 'string') {
buf = Buffer.from(buf);
}
try {
let output = encodingJapanese.convert(buf, {
to: 'UNICODE',
from: fromCharset,
type: 'string'
});
if (typeof output === 'string') {
output = Buffer.from(output);
}
return output;
} catch (err) {
// ignore, defaults to iconv-lite on error
}
}
return iconv.decode(buf, fromCharset);
} catch (err) {
// enforce utf-8, data loss might occur
return buf.toString();
}
},
/**
* Convert a string from specific encoding to UTF-8 Buffer
*
* @param {String|Buffer} data String to be encoded
* @param {String} [fromCharset='UTF-8'] Source encoding for the string
* @return {Buffer} UTF-8 encoded typed array
*/
convert(data, fromCharset) {
fromCharset = charset.normalizeCharset(fromCharset || 'UTF-8');
let bufString;
if (typeof data !== 'string') {
if (/^(us-)?ascii|utf-8|7bit$/i.test(fromCharset)) {
return data;
}
bufString = charset.decode(data, fromCharset);
return charset.encode(bufString);
}
return charset.encode(data);
},
/**
* Converts well known invalid character set names to proper names.
* eg. win-1257 will be converted to WINDOWS-1257
*
* @param {String} charset Charset name to convert
* @return {String} Canonicalized charset name
*/
normalizeCharset(charset) {
charset = charset.toLowerCase().trim();
// first pass
if (charsets.hasOwnProperty(charset) && charsets[charset]) {
return charsets[charset];
}
charset = charset
.replace(/^utf[-_]?(\d+)/, 'utf-$1')
.replace(/^(?:us[-_]?)ascii/, 'windows-1252')
.replace(/^win(?:dows)?[-_]?(\d+)/, 'windows-$1')
.replace(/^(?:latin|iso[-_]?8859)?[-_]?(\d+)/, 'iso-8859-$1')
.replace(/^l[-_]?(\d+)/, 'iso-8859-$1');
// updated pass
if (charsets.hasOwnProperty(charset) && charsets[charset]) {
return charsets[charset];
}
return charset.toUpperCase();
}
});

@@ -0,0 +1,212 @@
/* eslint quote-props: 0*/
'use strict';
module.exports = {
'866': 'IBM866',
'unicode-1-1-utf-8': 'UTF-8',
'utf-8': 'UTF-8',
utf8: 'UTF-8',
cp866: 'IBM866',
csibm866: 'IBM866',
ibm866: 'IBM866',
csisolatin2: 'ISO-8859-2',
'iso-8859-2': 'ISO-8859-2',
'iso-ir-101': 'ISO-8859-2',
'iso8859-2': 'ISO-8859-2',
iso88592: 'ISO-8859-2',
'iso_8859-2': 'ISO-8859-2',
'iso_8859-2:1987': 'ISO-8859-2',
l2: 'ISO-8859-2',
latin2: 'ISO-8859-2',
csisolatin3: 'ISO-8859-3',
'iso-8859-3': 'ISO-8859-3',
'iso-ir-109': 'ISO-8859-3',
'iso8859-3': 'ISO-8859-3',
iso88593: 'ISO-8859-3',
'iso_8859-3': 'ISO-8859-3',
'iso_8859-3:1988': 'ISO-8859-3',
l3: 'ISO-8859-3',
latin3: 'ISO-8859-3',
csisolatin4: 'ISO-8859-4',
'iso-8859-4': 'ISO-8859-4',
'iso-ir-110': 'ISO-8859-4',
'iso8859-4': 'ISO-8859-4',
iso88594: 'ISO-8859-4',
'iso_8859-4': 'ISO-8859-4',
'iso_8859-4:1988': 'ISO-8859-4',
l4: 'ISO-8859-4',
latin4: 'ISO-8859-4',
csisolatincyrillic: 'ISO-8859-5',
cyrillic: 'ISO-8859-5',
'iso-8859-5': 'ISO-8859-5',
'iso-ir-144': 'ISO-8859-5',
'iso8859-5': 'ISO-8859-5',
iso88595: 'ISO-8859-5',
'iso_8859-5': 'ISO-8859-5',
'iso_8859-5:1988': 'ISO-8859-5',
arabic: 'ISO-8859-6',
'asmo-708': 'ISO-8859-6',
csiso88596e: 'ISO-8859-6',
csiso88596i: 'ISO-8859-6',
csisolatinarabic: 'ISO-8859-6',
'ecma-114': 'ISO-8859-6',
'iso-8859-6': 'ISO-8859-6',
'iso-8859-6-e': 'ISO-8859-6',
'iso-8859-6-i': 'ISO-8859-6',
'iso-ir-127': 'ISO-8859-6',
'iso8859-6': 'ISO-8859-6',
iso88596: 'ISO-8859-6',
'iso_8859-6': 'ISO-8859-6',
'iso_8859-6:1987': 'ISO-8859-6',
csisolatingreek: 'ISO-8859-7',
'ecma-118': 'ISO-8859-7',
elot_928: 'ISO-8859-7',
greek: 'ISO-8859-7',
greek8: 'ISO-8859-7',
'iso-8859-7': 'ISO-8859-7',
'iso-ir-126': 'ISO-8859-7',
'iso8859-7': 'ISO-8859-7',
iso88597: 'ISO-8859-7',
'iso_8859-7': 'ISO-8859-7',
'iso_8859-7:1987': 'ISO-8859-7',
sun_eu_greek: 'ISO-8859-7',
csiso88598e: 'ISO-8859-8',
csisolatinhebrew: 'ISO-8859-8',
hebrew: 'ISO-8859-8',
'iso-8859-8': 'ISO-8859-8',
'iso-8859-8-e': 'ISO-8859-8',
'iso-8859-8-i': 'ISO-8859-8',
'iso-ir-138': 'ISO-8859-8',
'iso8859-8': 'ISO-8859-8',
iso88598: 'ISO-8859-8',
'iso_8859-8': 'ISO-8859-8',
'iso_8859-8:1988': 'ISO-8859-8',
visual: 'ISO-8859-8',
csisolatin6: 'ISO-8859-10',
'iso-8859-10': 'ISO-8859-10',
'iso-ir-157': 'ISO-8859-10',
'iso8859-10': 'ISO-8859-10',
iso885910: 'ISO-8859-10',
l6: 'ISO-8859-10',
latin6: 'ISO-8859-10',
'iso-8859-13': 'ISO-8859-13',
'iso8859-13': 'ISO-8859-13',
iso885913: 'ISO-8859-13',
'iso-8859-14': 'ISO-8859-14',
'iso8859-14': 'ISO-8859-14',
iso885914: 'ISO-8859-14',
csisolatin9: 'ISO-8859-15',
'iso-8859-15': 'ISO-8859-15',
'iso8859-15': 'ISO-8859-15',
iso885915: 'ISO-8859-15',
'iso_8859-15': 'ISO-8859-15',
l9: 'ISO-8859-15',
'iso-8859-16': 'ISO-8859-16',
cskoi8r: 'KOI8-R',
koi: 'KOI8-R',
koi8: 'KOI8-R',
'koi8-r': 'KOI8-R',
koi8_r: 'KOI8-R',
'koi8-ru': 'KOI8-U',
'koi8-u': 'KOI8-U',
csmacintosh: 'macintosh',
mac: 'macintosh',
macintosh: 'macintosh',
'x-mac-roman': 'macintosh',
'dos-874': 'windows-874',
'iso-8859-11': 'windows-874',
'iso8859-11': 'windows-874',
iso885911: 'windows-874',
'tis-620': 'windows-874',
'windows-874': 'windows-874',
cp1250: 'windows-1250',
'windows-1250': 'windows-1250',
'x-cp1250': 'windows-1250',
cp1251: 'windows-1251',
'windows-1251': 'windows-1251',
'x-cp1251': 'windows-1251',
'ansi_x3.4-1968': 'windows-1252',
ascii: 'windows-1252',
cp1252: 'windows-1252',
cp819: 'windows-1252',
csisolatin1: 'windows-1252',
ibm819: 'windows-1252',
'iso-8859-1': 'windows-1252',
'iso-ir-100': 'windows-1252',
'iso8859-1': 'windows-1252',
iso88591: 'windows-1252',
'iso_8859-1': 'windows-1252',
'iso_8859-1:1987': 'windows-1252',
l1: 'windows-1252',
latin1: 'windows-1252',
'us-ascii': 'windows-1252',
'windows-1252': 'windows-1252',
'x-cp1252': 'windows-1252',
cp1253: 'windows-1253',
'windows-1253': 'windows-1253',
'x-cp1253': 'windows-1253',
cp1254: 'windows-1254',
csisolatin5: 'windows-1254',
'iso-8859-9': 'windows-1254',
'iso-ir-148': 'windows-1254',
'iso8859-9': 'windows-1254',
iso88599: 'windows-1254',
'iso_8859-9': 'windows-1254',
'iso_8859-9:1989': 'windows-1254',
l5: 'windows-1254',
latin5: 'windows-1254',
'windows-1254': 'windows-1254',
'x-cp1254': 'windows-1254',
cp1255: 'windows-1255',
'windows-1255': 'windows-1255',
'x-cp1255': 'windows-1255',
cp1256: 'windows-1256',
'windows-1256': 'windows-1256',
'x-cp1256': 'windows-1256',
cp1257: 'windows-1257',
'windows-1257': 'windows-1257',
'x-cp1257': 'windows-1257',
cp1258: 'windows-1258',
'windows-1258': 'windows-1258',
'x-cp1258': 'windows-1258',
chinese: 'GBK',
csgb2312: 'GBK',
csiso58gb231280: 'GBK',
gb2312: 'GBK',
gb_2312: 'GBK',
'gb_2312-80': 'GBK',
gbk: 'GBK',
'iso-ir-58': 'GBK',
'x-gbk': 'GBK',
gb18030: 'gb18030',
big5: 'Big5',
'big5-hkscs': 'Big5',
'cn-big5': 'Big5',
csbig5: 'Big5',
'x-x-big5': 'Big5',
cseucpkdfmtjapanese: 'EUC-JP',
'euc-jp': 'EUC-JP',
'x-euc-jp': 'EUC-JP',
csshiftjis: 'Shift_JIS',
ms932: 'Shift_JIS',
ms_kanji: 'Shift_JIS',
'shift-jis': 'Shift_JIS',
shift_jis: 'Shift_JIS',
sjis: 'Shift_JIS',
'windows-31j': 'Shift_JIS',
'x-sjis': 'Shift_JIS',
cseuckr: 'EUC-KR',
csksc56011987: 'EUC-KR',
'euc-kr': 'EUC-KR',
'iso-ir-149': 'EUC-KR',
korean: 'EUC-KR',
'ks_c_5601-1987': 'EUC-KR',
'ks_c_5601-1989': 'EUC-KR',
ksc5601: 'EUC-KR',
ksc_5601: 'EUC-KR',
'windows-949': 'EUC-KR',
'utf-16be': 'UTF-16BE',
'utf-16': 'UTF-16LE',
'utf-16le': 'UTF-16LE'
};

@@ -0,0 +1,903 @@
/* eslint no-control-regex: 0, no-div-regex: 0, quotes: 0 */
'use strict';
const libcharset = require('./charset');
const libbase64 = require('libbase64');
const libqp = require('libqp');
const mimetypes = require('./mimetypes');
const STAGE_KEY = 0x1001;
const STAGE_VALUE = 0x1002;
class Libmime {
constructor(config) {
this.config = config || {};
}
/**
* Checks if a value is a plaintext string (uses only printable 7bit chars)
*
* @param {String} value String to be tested
* @returns {Boolean} true if it is a plaintext string
*/
isPlainText(value) {
if (typeof value !== 'string' || /[\x00-\x08\x0b\x0c\x0e-\x1f\u0080-\uFFFF]/.test(value)) {
return false;
} else {
return true;
}
}
/**
* Checks if a multi line string contains lines longer than the selected value.
*
* Useful when detecting if a mail message needs any processing at all:
* if only plaintext characters are used and lines are short, then there is
* no need to encode the values in any way. If the value is plaintext but has
* longer lines than allowed, then use format=flowed
*
* @param {String} str String to check
* @param {Number} lineLength Max line length to check for
* @returns {Boolean} Returns true if there is at least one line longer than lineLength chars
*/
hasLongerLines(str, lineLength) {
return new RegExp('^.{' + (lineLength + 1) + ',}', 'm').test(str);
}
/**
* Decodes a string from a format=flowed soft wrapping.
*
* @param {String} str Plaintext string with format=flowed to decode
* @param {Boolean} [delSp] If true, delete leading spaces (delsp=yes)
* @return {String} Mime decoded string
*/
decodeFlowed(str, delSp) {
str = (str || '').toString();
return (
str
.split(/\r?\n/)
// remove soft linebreaks
// soft linebreaks are added after space symbols
.reduce((previousValue, currentValue) => {
if (/ $/.test(previousValue) && !/(^|\n)-- $/.test(previousValue)) {
if (delSp) {
// delsp adds space to text to be able to fold it
// these spaces can be removed once the text is unfolded
return previousValue.slice(0, -1) + currentValue;
} else {
return previousValue + currentValue;
}
} else {
return previousValue + '\n' + currentValue;
}
})
// remove whitespace stuffing
// http://tools.ietf.org/html/rfc3676#section-4.4
.replace(/^ /gm, '')
);
}
/**
* Adds soft line breaks to content marked with format=flowed to
* ensure that no line in the message is ever longer than lineLength
*
* @param {String} str Plaintext string that requires wrapping
* @param {Number} [lineLength=76] Maximum length of a line
* @return {String} String with forced line breaks
*/
encodeFlowed(str, lineLength) {
lineLength = lineLength || 76;
let flowed = [];
str.split(/\r?\n/).forEach(line => {
flowed.push(
this.foldLines(
line
// space stuffing http://tools.ietf.org/html/rfc3676#section-4.2
.replace(/^( |From|>)/gim, ' $1'),
lineLength,
true
)
);
});
return flowed.join('\r\n');
}
/**
* Encodes a string or a Buffer to a UTF-8 MIME Word (rfc2047)
*
* @param {String|Buffer} data String to be encoded
* @param {String} mimeWordEncoding='Q' Encoding for the mime word, either Q or B
* @param {Number} [maxLength=0] If set, split mime words into several chunks if needed
* @return {String} Single or several mime words joined together
*/
encodeWord(data, mimeWordEncoding, maxLength) {
mimeWordEncoding = (mimeWordEncoding || 'Q').toString().toUpperCase().trim().charAt(0);
maxLength = maxLength || 0;
let encodedStr;
let toCharset = 'UTF-8';
if (maxLength && maxLength > 7 + toCharset.length) {
maxLength -= 7 + toCharset.length;
}
if (mimeWordEncoding === 'Q') {
// https://tools.ietf.org/html/rfc2047#section-5 rule (3)
encodedStr = libqp.encode(data).replace(/[^a-z0-9!*+\-/=]/gi, chr => {
let ord = chr.charCodeAt(0).toString(16).toUpperCase();
if (chr === ' ') {
return '_';
} else {
return '=' + (ord.length === 1 ? '0' + ord : ord);
}
});
} else if (mimeWordEncoding === 'B') {
encodedStr = typeof data === 'string' ? data : libbase64.encode(data);
maxLength = maxLength ? Math.max(3, ((maxLength - (maxLength % 4)) / 4) * 3) : 0;
}
if (maxLength && (mimeWordEncoding !== 'B' ? encodedStr : libbase64.encode(data)).length > maxLength) {
if (mimeWordEncoding === 'Q') {
encodedStr = this.splitMimeEncodedString(encodedStr, maxLength).join('?= =?' + toCharset + '?' + mimeWordEncoding + '?');
} else {
// RFC2047 6.3 (2) states that encoded-word must include an integral number of characters, so no chopping unicode sequences
let parts = [];
let lpart = '';
for (let i = 0, len = encodedStr.length; i < len; i++) {
let chr = encodedStr.charAt(i);
// check if we can add this character to the existing string
// without breaking byte length limit
if (Buffer.byteLength(lpart + chr) <= maxLength || i === 0) {
lpart += chr;
} else {
// we hit the length limit, so push the existing string and start over
parts.push(libbase64.encode(lpart));
lpart = chr;
}
}
if (lpart) {
parts.push(libbase64.encode(lpart));
}
if (parts.length > 1) {
encodedStr = parts.join('?= =?' + toCharset + '?' + mimeWordEncoding + '?');
} else {
encodedStr = parts.join('');
}
}
} else if (mimeWordEncoding === 'B') {
encodedStr = libbase64.encode(data);
}
return '=?' + toCharset + '?' + mimeWordEncoding + '?' + encodedStr + (encodedStr.substr(-2) === '?=' ? '' : '?=');
}
/**
* Decode a complete mime word encoded string
*
* @param {String} str Mime word encoded string
* @return {String} Decoded unicode string
*/
decodeWord(charset, encoding, str) {
// RFC2231 added language tag to the encoding
// see: https://tools.ietf.org/html/rfc2231#section-5
// this implementation silently ignores this tag
let splitPos = charset.indexOf('*');
if (splitPos >= 0) {
charset = charset.substr(0, splitPos);
}
charset = libcharset.normalizeCharset(charset);
encoding = encoding.toUpperCase();
if (encoding === 'Q') {
str = str
// remove spaces between = and hex char, this might indicate invalidly applied line splitting
.replace(/=\s+([0-9a-fA-F])/g, '=$1')
// convert all underscores to spaces
.replace(/[_\s]/g, ' ');
let buf = Buffer.from(str);
let bytes = [];
for (let i = 0, len = buf.length; i < len; i++) {
let c = buf[i];
if (i <= len - 2 && c === 0x3d /* = */) {
let c1 = this.getHex(buf[i + 1]);
let c2 = this.getHex(buf[i + 2]);
if (c1 && c2) {
let c = parseInt(c1 + c2, 16);
bytes.push(c);
i += 2;
continue;
}
}
bytes.push(c);
}
str = Buffer.from(bytes);
} else if (encoding === 'B') {
str = Buffer.concat(
str
.split('=')
.filter(s => s !== '') // filter empty string
.map(str => Buffer.from(str, 'base64'))
);
} else {
// keep as is, convert Buffer to unicode string, assume utf8
str = Buffer.from(str);
}
return libcharset.decode(str, charset);
}
/**
* Finds word sequences with non ascii text and converts these to mime words
*
* @param {String|Buffer} data String to be encoded
* @param {String} mimeWordEncoding='Q' Encoding for the mime word, either Q or B
* @param {Number} [maxLength=0] If set, split mime words into several chunks if needed
* @param {String} [fromCharset='UTF-8'] Source character set
* @return {String} String with possible mime words
*/
encodeWords(data, mimeWordEncoding, maxLength, fromCharset) {
if (!fromCharset && typeof maxLength === 'string' && !maxLength.match(/^[0-9]+$/)) {
fromCharset = maxLength;
maxLength = undefined;
}
maxLength = maxLength || 0;
let decodedValue = libcharset.decode(libcharset.convert(data || '', fromCharset));
let encodedValue;
let firstMatch = decodedValue.match(/(?:^|\s)([^\s]*[\u0080-\uFFFF])/);
if (!firstMatch) {
return decodedValue;
}
let lastMatch = decodedValue.match(/([\u0080-\uFFFF][^\s]*)[^\u0080-\uFFFF]*$/);
if (!lastMatch) {
// should not happen
return decodedValue;
}
let startIndex =
firstMatch.index +
(
firstMatch[0].match(/[^\s]/) || {
index: 0
}
).index;
let endIndex = lastMatch.index + (lastMatch[1] || '').length;
encodedValue =
(startIndex ? decodedValue.substr(0, startIndex) : '') +
this.encodeWord(decodedValue.substring(startIndex, endIndex), mimeWordEncoding || 'Q', maxLength) +
(endIndex < decodedValue.length ? decodedValue.substr(endIndex) : '');
return encodedValue;
}
/**
* Decode a string that might include one or several mime words
*
* @param {String} str String including some mime words that will be encoded
* @return {String} Decoded unicode string
*/
decodeWords(str) {
return (
(str || '')
.toString()
// find base64 words that can be joined
.replace(/(=\?([^?]+)\?[Bb]\?[^?]*\?=)\s*(?==\?([^?]+)\?[Bb]\?[^?]*\?=)/g, (match, left, chLeft, chRight) => {
// only mark b64 chunks to be joined if charsets match
if (libcharset.normalizeCharset(chLeft || '') === libcharset.normalizeCharset(chRight || '')) {
// set a joiner marker
return left + '__\x00JOIN\x00__';
}
return match;
})
// find QP words that can be joined
.replace(/(=\?([^?]+)\?[Qq]\?[^?]*\?=)\s*(?==\?([^?]+)\?[Qq]\?[^?]*\?=)/g, (match, left, chLeft, chRight) => {
// only mark QP chunks to be joined if charsets match
if (libcharset.normalizeCharset(chLeft || '') === libcharset.normalizeCharset(chRight || '')) {
// set a joiner marker
return left + '__\x00JOIN\x00__';
}
return match;
})
// join base64 encoded words
.replace(/(\?=)?__\x00JOIN\x00__(=\?([^?]+)\?[QqBb]\?)?/g, '')
// remove spaces between mime encoded words
.replace(/(=\?[^?]+\?[QqBb]\?[^?]*\?=)\s+(?==\?[^?]+\?[QqBb]\?[^?]*\?=)/g, '$1')
// decode words
.replace(/=\?([\w_\-*]+)\?([QqBb])\?([^?]*)\?=/g, (m, charset, encoding, text) => this.decodeWord(charset, encoding, text))
);
}
getHex(c) {
if ((c >= 0x30 /* 0 */ && c <= 0x39) /* 9 */ || (c >= 0x61 /* a */ && c <= 0x66) /* f */ || (c >= 0x41 /* A */ && c <= 0x46) /* F */) {
return String.fromCharCode(c);
}
return false;
}
/**
* Splits a header line into a key and value pair.
* The result is not mime word decoded, you need to do your own decoding based
* on the rules for the specific header key
*
* @param {String} headerLine Single header line, might include linebreaks as well if folded
* @return {Object} An object of {key, value}
*/
decodeHeader(headerLine) {
let line = (headerLine || '')
.toString()
.replace(/(?:\r?\n|\r)[ \t]*/g, ' ')
.trim(),
match = line.match(/^\s*([^:]+):(.*)$/),
key = ((match && match[1]) || '').trim().toLowerCase(),
value = ((match && match[2]) || '').trim();
return {
key,
value
};
}
/**
* Parses a block of header lines. Does not decode mime words as every
* header might have its own rules (eg. formatted email addresses and such)
*
* @param {String} headers Headers string
* @return {Object} An object of headers, where header keys are object keys. NB! Several values with the same key make up an Array
*/
decodeHeaders(headers) {
let lines = headers.split(/\r?\n|\r/),
headersObj = {},
header,
i,
len;
for (i = lines.length - 1; i >= 0; i--) {
if (i && lines[i].match(/^\s/)) {
lines[i - 1] += '\r\n' + lines[i];
lines.splice(i, 1);
}
}
for (i = 0, len = lines.length; i < len; i++) {
header = this.decodeHeader(lines[i]);
if (!headersObj[header.key]) {
headersObj[header.key] = [header.value];
} else {
headersObj[header.key].push(header.value);
}
}
return headersObj;
}
/**
* Joins parsed header value together as 'value; param1=value1; param2=value2'
* PS: We are following RFC 822 for the list of special characters that we need to keep in quotes.
* Refer: https://www.w3.org/Protocols/rfc1341/4_Content-Type.html
* @param {Object} structured Parsed header value
* @return {String} joined header value
*/
buildHeaderValue(structured) {
let paramsArray = [];
Object.keys(structured.params || {}).forEach(param => {
// filename might include unicode characters so it is a special case
let value = structured.params[param];
if (!this.isPlainText(value) || value.length >= 75) {
this.buildHeaderParam(param, value, 50).forEach(encodedParam => {
if (!/[\s"\\;:/=(),<>@[\]?]|^[-']|'$/.test(encodedParam.value) || encodedParam.key.substr(-1) === '*') {
paramsArray.push(encodedParam.key + '=' + encodedParam.value);
} else {
paramsArray.push(encodedParam.key + '=' + JSON.stringify(encodedParam.value));
}
});
} else if (/[\s'"\\;:/=(),<>@[\]?]|^-/.test(value)) {
paramsArray.push(param + '=' + JSON.stringify(value));
} else {
paramsArray.push(param + '=' + value);
}
});
return structured.value + (paramsArray.length ? '; ' + paramsArray.join('; ') : '');
}
/**
* Parses a header value with key=value arguments into a structured
* object.
*
* parseHeaderValue('text/plain; CHARSET="UTF-8"') ->
* {
* 'value': 'text/plain',
* 'params': {
* 'charset': 'UTF-8'
* }
* }
*
* @param {String} str Header value
* @return {Object} Header value as a parsed structure
*/
parseHeaderValue(str) {
let response = {
value: false,
params: {}
};
let key = false;
let value = '';
let stage = STAGE_VALUE;
let quote = false;
let escaped = false;
let chr;
for (let i = 0, len = str.length; i < len; i++) {
chr = str.charAt(i);
switch (stage) {
case STAGE_KEY:
if (chr === '=') {
key = value.trim().toLowerCase();
stage = STAGE_VALUE;
value = '';
break;
}
value += chr;
break;
case STAGE_VALUE:
if (escaped) {
value += chr;
} else if (chr === '\\') {
escaped = true;
continue;
} else if (quote && chr === quote) {
quote = false;
} else if (!quote && chr === '"') {
quote = chr;
} else if (!quote && chr === ';') {
if (key === false) {
response.value = value.trim();
} else {
response.params[key] = value.trim();
}
stage = STAGE_KEY;
value = '';
} else {
value += chr;
}
escaped = false;
break;
}
}
// finalize remainder
value = value.trim();
if (stage === STAGE_VALUE) {
if (key === false) {
// default value
response.value = value;
} else {
// subkey value
response.params[key] = value;
}
} else if (value) {
// treat as key without value, see emptykey:
// Header-Key: somevalue; key=value; emptykey
response.params[value.toLowerCase()] = '';
}
// handle parameter value continuations
// https://tools.ietf.org/html/rfc2231#section-3
// preprocess values
Object.keys(response.params).forEach(key => {
let actualKey;
let nr;
let value;
let match = key.match(/\*((\d+)\*?)?$/);
if (!match) {
// nothing to do here, does not seem like a continuation param
return;
}
actualKey = key.substr(0, match.index).toLowerCase();
nr = Number(match[2]) || 0;
if (!response.params[actualKey] || typeof response.params[actualKey] !== 'object') {
response.params[actualKey] = {
charset: false,
values: []
};
}
value = response.params[key];
if (nr === 0 && match[0].charAt(match[0].length - 1) === '*' && (match = value.match(/^([^']*)'[^']*'(.*)$/))) {
response.params[actualKey].charset = match[1] || 'utf-8';
value = match[2];
}
response.params[actualKey].values.push({ nr, value });
// remove the old reference
delete response.params[key];
});
// concatenate split rfc2231 strings and convert encoded strings to mime encoded words
Object.keys(response.params).forEach(key => {
let value;
if (response.params[key] && Array.isArray(response.params[key].values)) {
value = response.params[key].values
.sort((a, b) => a.nr - b.nr)
.map(val => (val && val.value) || '')
.join('');
if (response.params[key].charset) {
// convert "%AB" to "=?charset?Q?=AB?=" and then to unicode
response.params[key] = this.decodeWords(
'=?' +
response.params[key].charset +
'?Q?' +
value
// fix invalidly encoded chars
.replace(/[=?_\s]/g, s => {
let c = s.charCodeAt(0).toString(16);
if (s === ' ') {
return '_';
} else {
return '%' + (c.length < 2 ? '0' : '') + c;
}
})
// change from urlencoding to percent encoding
.replace(/%/g, '=') +
'?='
);
} else {
response.params[key] = this.decodeWords(value);
}
}
});
return response;
}
/**
* Encodes a string or a Buffer to a UTF-8 Parameter Value Continuation encoding (rfc2231)
* Useful for splitting long parameter values.
*
* For example
* title="unicode string"
* becomes
* title*0*=utf-8''unicode
* title*1*=%20string
*
* @param {String} key Parameter key (eg. `filename`)
* @param {String|Buffer} data String to be encoded
* @param {Number} [maxLength=50] Max length for generated chunks
* @param {String} [fromCharset='UTF-8'] Source character set
* @return {Array} A list of encoded keys and headers
*/
buildHeaderParam(key, data, maxLength, fromCharset) {
let list = [];
let encodedStr = typeof data === 'string' ? data : libcharset.decode(data, fromCharset);
let encodedStrArr;
let chr, ord;
let line;
let startPos = 0;
let isEncoded = false;
let i, len;
maxLength = maxLength || 50;
// process ascii only text
if (this.isPlainText(data)) {
// check if conversion is even needed
if (encodedStr.length <= maxLength) {
return [
{
key,
value: encodedStr
}
];
}
encodedStr = encodedStr.replace(new RegExp('.{' + maxLength + '}', 'g'), str => {
list.push({
line: str
});
return '';
});
if (encodedStr) {
list.push({
line: encodedStr
});
}
} else {
if (/[\uD800-\uDBFF]/.test(encodedStr)) {
// string contains surrogate pairs, so normalize it to an array of bytes
encodedStrArr = [];
for (i = 0, len = encodedStr.length; i < len; i++) {
chr = encodedStr.charAt(i);
ord = chr.charCodeAt(0);
if (ord >= 0xd800 && ord <= 0xdbff && i < len - 1) {
chr += encodedStr.charAt(i + 1);
encodedStrArr.push(chr);
i++;
} else {
encodedStrArr.push(chr);
}
}
encodedStr = encodedStrArr;
}
// first line includes the charset and language info and needs to be encoded
// even if it does not contain any unicode characters
line = "utf-8''";
isEncoded = true;
startPos = 0;
// process text with unicode or special chars
for (i = 0, len = encodedStr.length; i < len; i++) {
chr = encodedStr[i];
if (isEncoded) {
chr = this.safeEncodeURIComponent(chr);
} else {
// try to urlencode current char
chr = chr === ' ' ? chr : this.safeEncodeURIComponent(chr);
// By default it is not required to encode a line, the need
// only appears when the string contains unicode or special chars
// in this case we start processing the line over and encode all chars
if (chr !== encodedStr[i]) {
// Check if it is even possible to add the encoded char to the line
// If not, there is no reason to use this line, just push it to the list
// and start a new line with the char that needs encoding
if ((this.safeEncodeURIComponent(line) + chr).length >= maxLength) {
list.push({
line,
encoded: isEncoded
});
line = '';
startPos = i - 1;
} else {
isEncoded = true;
i = startPos;
line = '';
continue;
}
}
}
// if the line is already too long, push it to the list and start a new one
if ((line + chr).length >= maxLength) {
list.push({
line,
encoded: isEncoded
});
line = chr = encodedStr[i] === ' ' ? ' ' : this.safeEncodeURIComponent(encodedStr[i]);
if (chr === encodedStr[i]) {
isEncoded = false;
startPos = i - 1;
} else {
isEncoded = true;
}
} else {
line += chr;
}
}
if (line) {
list.push({
line,
encoded: isEncoded
});
}
}
return list.map((item, i) => ({
// encoded lines: {name}*{part}*
// unencoded lines: {name}*{part}
// if any line needs to be encoded then the first line (part==0) is always encoded
key: key + '*' + i + (item.encoded ? '*' : ''),
value: item.line
}));
}
/**
* Returns file extension for a content type string. If no suitable extensions
* are found, 'bin' is used as the default extension
*
* @param {String} mimeType Content type to be checked for
* @return {String} File extension
*/
detectExtension(mimeType) {
mimeType = (mimeType || '').toString().toLowerCase().replace(/\s/g, '');
if (!(mimeType in mimetypes.list)) {
return 'bin';
}
if (typeof mimetypes.list[mimeType] === 'string') {
return mimetypes.list[mimeType];
}
let mimeParts = mimeType.split('/');
// search for name match
for (let i = 0, len = mimetypes.list[mimeType].length; i < len; i++) {
if (mimeParts[1] === mimetypes.list[mimeType][i]) {
return mimetypes.list[mimeType][i];
}
}
// use the first one
return mimetypes.list[mimeType][0] !== '*' ? mimetypes.list[mimeType][0] : 'bin';
}
/**
* Returns content type for a file extension. If no suitable content types
* are found, 'application/octet-stream' is used as the default content type
*
* @param {String} extension Extension to be checked for
* @return {String} Content type
*/
detectMimeType(extension) {
extension = (extension || '').toString().toLowerCase().replace(/\s/g, '').replace(/^\./g, '').split('.').pop();
if (!(extension in mimetypes.extensions)) {
return 'application/octet-stream';
}
if (typeof mimetypes.extensions[extension] === 'string') {
return mimetypes.extensions[extension];
}
let mimeParts;
// search for name match
for (let i = 0, len = mimetypes.extensions[extension].length; i < len; i++) {
mimeParts = mimetypes.extensions[extension][i].split('/');
if (mimeParts[1] === extension) {
return mimetypes.extensions[extension][i];
}
}
// use the first one
return mimetypes.extensions[extension][0];
}
/**
* Folds long lines, useful for folding header lines (afterSpace=false) and
* flowed text (afterSpace=true)
*
* @param {String} str String to be folded
* @param {Number} [lineLength=76] Maximum length of a line
* @param {Boolean} afterSpace If true, leave a space at the end of a line
* @return {String} String with folded lines
*/
foldLines(str, lineLength, afterSpace) {
str = (str || '').toString();
lineLength = lineLength || 76;
let pos = 0,
len = str.length,
result = '',
line,
match;
while (pos < len) {
line = str.substr(pos, lineLength);
if (line.length < lineLength) {
result += line;
break;
}
if ((match = line.match(/^[^\n\r]*(\r?\n|\r)/))) {
line = match[0];
result += line;
pos += line.length;
continue;
} else if ((match = line.match(/(\s+)[^\s]*$/)) && match[0].length - (afterSpace ? (match[1] || '').length : 0) < line.length) {
line = line.substr(0, line.length - (match[0].length - (afterSpace ? (match[1] || '').length : 0)));
} else if ((match = str.substr(pos + line.length).match(/^[^\s]+(\s*)/))) {
line = line + match[0].substr(0, match[0].length - (!afterSpace ? (match[1] || '').length : 0));
}
result += line;
pos += line.length;
if (pos < len) {
result += '\r\n';
}
}
return result;
}
/**
* Splits a mime encoded string. Needed for dividing mime words into smaller chunks
*
* @param {String} str Mime encoded string to be split up
* @param {Number} maxlen Maximum length of characters for one part (minimum 12)
* @return {Array} Split string
*/
splitMimeEncodedString(str, maxlen) {
let curLine,
match,
chr,
done,
lines = [];
// require at least 12 symbols to fit possible 4 octet UTF-8 sequences
maxlen = Math.max(maxlen || 0, 12);
while (str.length) {
curLine = str.substr(0, maxlen);
// move incomplete escaped char back to main
if ((match = curLine.match(/[=][0-9A-F]?$/i))) {
curLine = curLine.substr(0, match.index);
}
done = false;
while (!done) {
done = true;
// check if not middle of a unicode char sequence
if ((match = str.substr(curLine.length).match(/^[=]([0-9A-F]{2})/i))) {
chr = parseInt(match[1], 16);
// invalid sequence, move one char back and recheck
if (chr < 0xc2 && chr > 0x7f) {
curLine = curLine.substr(0, curLine.length - 3);
done = false;
}
}
}
if (curLine.length) {
lines.push(curLine);
}
str = str.substr(curLine.length);
}
return lines;
}
encodeURICharComponent(chr) {
let res = '';
let ord = chr.charCodeAt(0).toString(16).toUpperCase();
if (ord.length % 2) {
ord = '0' + ord;
}
if (ord.length > 2) {
for (let i = 0, len = ord.length / 2; i < len; i++) {
res += '%' + ord.substr(i, 2);
}
} else {
res += '%' + ord;
}
return res;
}
safeEncodeURIComponent(str) {
str = (str || '').toString();
try {
            // might throw if we try to encode invalid sequences, e.g. a partial emoji (lone surrogate)
str = encodeURIComponent(str);
} catch (E) {
// should never run
return str.replace(/[^\x00-\x1F *'()<>@,;:\\"[\]?=\u007F-\uFFFF]+/g, '');
}
        // ensure chars that are not handled by encodeURIComponent are converted as well
return str.replace(/[\x00-\x1F *'()<>@,;:\\"[\]?=\u007F-\uFFFF]/g, chr => this.encodeURICharComponent(chr));
}
}
module.exports = new Libmime();
module.exports.Libmime = Libmime;
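For reference, the chunking logic of `splitMimeEncodedString` above can be exercised standalone. The sketch below copies the method body into a plain function (an illustrative test harness, not part of the library) and shows how the back-off step keeps a quoted-printable `=XX` escape intact: bytes 0x80–0xC1 are UTF-8 continuation bytes (or invalid lead bytes), so a chunk boundary that would strand one gets moved back by one escape.

```javascript
// Standalone copy of the splitMimeEncodedString logic above, for illustration only
function splitMimeEncodedString(str, maxlen) {
    let curLine, match, chr, done;
    let lines = [];
    // require at least 12 symbols to fit possible 4-octet UTF-8 sequences
    maxlen = Math.max(maxlen || 0, 12);
    while (str.length) {
        curLine = str.substr(0, maxlen);
        // move an incomplete "=" or "=X" escape back to the remainder
        if ((match = curLine.match(/[=][0-9A-F]?$/i))) {
            curLine = curLine.substr(0, match.index);
        }
        done = false;
        while (!done) {
            done = true;
            // if the remainder starts with a continuation byte, back off one escape
            if ((match = str.substr(curLine.length).match(/^[=]([0-9A-F]{2})/i))) {
                chr = parseInt(match[1], 16);
                if (chr < 0xc2 && chr > 0x7f) {
                    curLine = curLine.substr(0, curLine.length - 3);
                    done = false;
                }
            }
        }
        if (curLine.length) {
            lines.push(curLine);
        }
        str = str.substr(curLine.length);
    }
    return lines;
}

// "õäö" quoted-printable encoded; a plain cut at 15 chars would strand "=B6"
// (a continuation byte) from its lead byte "=C3", so the split backs off
const parts = splitMimeEncodedString('=C3=B5=C3=A4=C3=B6', 15);
console.log(parts); // → [ '=C3=B5=C3=A4', '=C3=B6' ]
```

Joining the parts reproduces the original string, so each chunk can later be wrapped in its own `=?charset?Q?...?=` encoded word without corrupting multi-byte characters.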

File diff suppressed because it is too large


@@ -0,0 +1,37 @@
{
"name": "libmime",
"description": "Encode and decode quoted printable and base64 strings",
"version": "5.2.0",
"main": "lib/libmime",
"homepage": "https://github.com/andris9/libmime",
"repository": {
"type": "git",
"url": "git://github.com/andris9/libmime.git"
},
"license": "MIT",
"keywords": [
"MIME",
"Base64",
"Quoted-Printable"
],
"author": "Andris Reinman <andris@kreata.ee>",
"scripts": {
"test": "grunt"
},
"dependencies": {
"encoding-japanese": "2.0.0",
"iconv-lite": "0.6.3",
"libbase64": "1.2.1",
"libqp": "2.0.1"
},
"devDependencies": {
"chai": "4.3.7",
"eslint-config-nodemailer": "1.2.0",
"eslint-config-prettier": "8.5.0",
"grunt": "1.5.3",
"grunt-cli": "1.4.3",
"grunt-eslint": "24.0.1",
"grunt-mocha-test": "0.13.3",
"mocha": "10.1.0"
}
}

test/imaps/node_modules/mailsplit/package.json generated vendored Normal file

@@ -0,0 +1,37 @@
{
"name": "mailsplit",
"version": "5.4.0",
"description": "Split email messages into an object stream",
"main": "index.js",
"directories": {
"test": "test"
},
"scripts": {
"test": "grunt"
},
"author": "Andris Reinman",
"license": "(MIT OR EUPL-1.1+)",
"dependencies": {
"libbase64": "1.2.1",
"libmime": "5.2.0",
"libqp": "2.0.1"
},
"devDependencies": {
"eslint": "8.29.0",
"eslint-config-nodemailer": "1.2.0",
"eslint-config-prettier": "8.5.0",
"grunt": "1.5.3",
"grunt-cli": "1.4.3",
"grunt-contrib-nodeunit": "4.0.0",
"grunt-eslint": "24.0.1",
"random-message": "1.1.0"
},
"files": [
"lib",
"index.js"
],
"repository": {
"type": "git",
"url": "https://github.com/andris9/mailsplit.git"
}
}