Mr. D is a textbook engineering guy: a little reserved, a little programmatic, yet so chivalrous and so protective of us. Hanging out with him is easy, simple, and casual. Except when he is walking us through a paper or some coding, when we have no choice but to listen, he is the one getting picked on the rest of the time. For example, we would keep him waiting a good forty-plus minutes after telling him we were just running upstairs to drop off a bag; or agree to head out at ten thirty and not wander lazily out of the room until two in the afternoon; or give him a good scolding whenever he could not explain himself clearly; or drag him along to eat the most mainland-style delicacy in HK, grilled fish, always extra spicy and numbing, a whole fish every time; or take him completely at his word when he generously said that once he settled down in Japan he would cover our travel costs to visit, and I really did write that promise down. We tease him without mercy, and we miss him just as sincerely. Mr. D, we are waiting for your good news. Safe travels; a girl who can be a good homemaker would suit you best, so keep your expectations and your gratitude.
I was riding two boats at once, and the one who ended up in the water was me, taking all kinds of psychological pressure and blows. But the pain and the grind were not wasted. While preparing for the GRE I cleared away my trouble with long, difficult sentences, chewed through the Red Book and the dreaded 3000-word list, and at my most obsessive I even started memorizing the Merriam-Webster dictionary; of course, I gave up somewhere around the letter C. Reading a dictionary is actually a very good method. I found it patches many of the gaps left by hurried test prep, such as parts of speech, usage, multiple senses, and all the figurative and extended meanings, and it effectively shifts you from the translation mindset of a Chinese-English or English-Chinese dictionary to understanding each word and example sentence in English. (Note: apart from the example sentences in Merriam-Webster, Oxford, Longman, and the like, I would skip the ones in vocabulary books altogether; they will only teach you fluent Chinglish.) With limited prep time I had to give it up. I would suggest that anyone with the time and energy stick with at least one Merriam-Webster dictionary compiled for the old GRE; I still flip through mine regularly. In short, my TOEFL vocabulary hurdle was cleared by way of the GRE. Sometimes, watching The Big Bang Theory, a string of GRE words I actually recognize flashes out of Sheldon's mouth, and the feeling is indescribable. "When shall I reach the top and hold all mountains in a single glance": that was my biggest gain. Once your backside is nailed to the chair and your eyes have traded sparks with a screen full of GRE words, the vocabulary demands of iBT reading will never scare you again.
It was already just before Chinese New Year 2012, around January 15th, and I had binged American TV shows to the point of nausea before I finally came to my senses. Another Spring Festival not going home, stuck in Beijing; dragging an injured leg, brewing coffee on top of my medication; my boyfriend unable to go home either because he was taking care of me. What was all of this for? I remembered a line: The moment when you think about giving up, think of the reason why you held on so long. Only then did I pick my books back up and start preparing seriously, with less than three weeks to the iBT and less than four weeks to IELTS.
Now for the questions themselves. In short, treat every multiple-choice question as a fill-in-the-blank: read the question, give yourself an answer first, and only then look at the options. This keeps you from being misled and losing your own judgment. Remember: the more active you are, the more passive ETS becomes. Specifically: First, vocabulary questions should be answered instantly; only if you truly do not know the word should you go back to the passage and gamble on its meaning, then pick the closest option without overthinking. If you do not recognize the original word or most of the options, there is no trick; stop wasting money and go back to memorizing vocabulary. Second, sentence-insertion questions: ETS writes these quite predictably. The inserted sentence always has nouns at its beginning and end (its backbone) that connect to the surrounding text, or a signal phrase such as "in either case". It is like a jigsaw puzzle; we are just adding one piece so the passage flows. Third, summary questions, my weakest area, weak because I often answered by feel, which is dead wrong. First eliminate the options that misstate the passage, then eliminate the ones that are too detailed. If you are stuck, go back to the passage, not to match the options you are torn between, but to get a grip on the overall structure. Since only three options are chosen, ask yourself: what is this passage about? Allow yourself only three parts in the answer, and those three parts are basically it. Fourth, "why does the author mention A" questions: the answer is in the sentence containing A or the one before it. Fifth, the various EXCEPT questions and the case-by-case detail questions: locate the sentence first, then scan the three sentences around it; if you still cannot find the answer, look at the main idea of the paragraph; if you still cannot, and the question really is that twisted, just guess toward the main idea of the whole passage. Watch your time here and do not linger too long.
@@ -116,5 +122,6 @@
I am still holding Forrest Gump's mother's box of chocolates, full of expectation; whatever the next piece turns out to be, the result will come soon. I am also still recovering from my injury and doing post-surgery rehab, hoping everything turns out well. God bless us.
-It’s not the end until you give up. That one is my own line, offered to the kids patient enough to read this far. With a grateful heart, I hope it can be of some small help to you, or to him or her, strangers though we are. My name is Patience.
-
Whether it is declaring love, submitting a manuscript, or sending off an application or a résumé, the head seems to shake "no" faster than it nods "yes". Where did I once read a study claiming that nodding is actually the easier motion? At an age when we still dare to call ourselves twenty-five, how afraid are we of being rejected? How afraid of drifting? How afraid that the future will not look like the picture we once dreamed? And how long is such fear allowed to last? Lately I keep wondering how long our memory is, and how long our lives are. Some fish whose name I cannot recall is said to have a three-second memory; in certain panicked, gray stretches of time, how I envied it. Yet when I truly return to life, away from the illusions of pain or joy, I am struck all over again by how short life is. Within these few short decades, well, suppose a span of one hundred years is a whole life (counting the days on a ventilator), then twenty-five sits at the quarter mark; how far can it really measure? Another pleasure of living among old papers is being able to drop myself back into any moment in history, any place, any group of people. Set against that, what does one twenty-five-year-old self amount to? Some of these moods come from my period; some from receiving a rejection letter within twenty-four hours (an acceptance usually takes a month or even a year); and some from rewatching the 2008 film version of Sex and the City. Those women walked from their twenties, wearing far more than Prada, into their forties and fifties, from Manhattan back to Manhattan, and they also accompanied me through college. Looking at them now, the labels that once dazzled me, or that I occasionally envied, seem fewer. Their independence and courage, and that "city" life in the series and the film that seems to float above everything except love and sex, none of it matters to the me of today. Five years later, rewatching them, I find myself interrogating every minute: "What do you have to lose?" As the heroine says, girls come to New York in their twenties, fight their way through labels and love, and once past forty they reach the stage of paying for the drinks. On the strength of that earlier rejection letter, I leap, illogically, to this: what does twenty-five-year-old me have to lose? And how do I accumulate something that the forty-year-old me will one day be able to lose? Right now I cannot answer that, just as we constantly toss out questions and then abandon them, in life, in study, in relationships. This is my first year living alone, and I have no wish to turn it into something like the "fifth year" currently fashionable among girls, as if it had to mark some milestone of growing up. I simply feel that at twenty-five, in my first year living alone, I truly have nothing to lose. At this moment I have made a decision: even though it is a rejection letter, it deserves a reply. Thank you for the time it took to write me that rejection, and thank you for giving me a place to set out from again, while twenty-five is still not too late.
\ No newline at end of file
diff --git a/docs/2013/12/20/Merry Christmas and Happy New Year/index.html b/docs/2013/12/20/Merry Christmas and Happy New Year/index.html
index bf555698b..30af16334 100644
--- a/docs/2013/12/20/Merry Christmas and Happy New Year/index.html
+++ b/docs/2013/12/20/Merry Christmas and Happy New Year/index.html
@@ -1,13 +1,18 @@
-Merry Christmas and Happy New Year | Beendless ~ 快节奏,慢生活,无止境
+Merry Christmas and Happy New Year | Beendless ~ 快节奏,慢生活,无止境
-
Valentine's Day stories come in many versions; perhaps, like the anthropologist Arnold van Gennep studying rites of passage, we could divide them into three stages: before love, during love, and after love. We hold expectations of love, we choose and we agonize, and we will certainly feel pain. No matter with whom, what kind of love, or at which stage, loving fiercely brings hurt, and loving gently cannot escape it either. The people who appear in our lives come and go, just as we appear, in different ways, at different stages of other people's lives. So let the story's highest exhilaration and its most heart-rending moments all melt into the morning sunlight. Warm, even if only for a little while, is beautiful enough.
\ No newline at end of file
diff --git "a/docs/2014/03/23/\345\216\273\345\261\216 vs. \345\216\273\346\255\273/index.html" "b/docs/2014/03/23/\345\216\273\345\261\216 vs. \345\216\273\346\255\273/index.html"
index afe7cffaa..efa0d0a83 100644
--- "a/docs/2014/03/23/\345\216\273\345\261\216 vs. \345\216\273\346\255\273/index.html"
+++ "b/docs/2014/03/23/\345\216\273\345\261\216 vs. \345\216\273\346\255\273/index.html"
@@ -1,15 +1,20 @@
-去屎 vs. 去死 | Beendless ~ 快节奏,慢生活,无止境
+去屎 vs. 去死 | Beendless ~ 快节奏,慢生活,无止境
-
Last night before bed I finished a very short novel, written a decade or so ago by the Belgian author Dimitri Verhulst and since translated into English and Chinese as a bestseller (De helaasheid der dingen; The Misfortunates; 《废柴家族》). It is described as "semi-autobiographical": a teenage boy coolly observing his own family, chiefly his father and uncles, a pack of drunks who can only squeeze into the old house with their aging mother and sponge off welfare. They live in a town so small that even tiny Belgium's map forgets it (the translator renders it roughly as "the middle of nowhere"), blind drunk every day, foul-mouthed and crude. Slightly rancid butter competes with the smell of shit; the boys compete over whose bladder can send a stream higher and farther, while the girls can use the same trick to lure in a school of little fish. The novel is soaked in amber beer, pubic hair turns up everywhere, and vomit piles up across the pages. And yet the warmth of this family is quietly stored up among brothers, between uncles and nephew, between father and son. That warmth is the kind of energy that can land a left hook on an enemy. "Enemy" may be too strong a word; it is just an anger, the instinct that erupts when a family you might call proletarian faces the abuse and reproach of the petite bourgeoisie. Likewise, belting out drinking songs about bodily functions, this family's hot blood douses the folklorists' scholarship and research, making the latter look hypocritical and pallid by comparison. The brothers, the dainty princess of a cousin, and the demented grandmother on her deathbed can all sing the wood-ear picking song: "The age of miracles never ceases; the weather is dry and parched, yet my wood ear is moist and wet. The cock has crowed once, the cock has crowed twice, and I feel absolutely great." (quoted from the translation) Yet the folklorists who long to hear exactly those lyrics never get the chance.
\ No newline at end of file
diff --git a/docs/2015/09/26/Set up SSL for website with Nginx and StartSSL/index.html b/docs/2015/09/26/Set up SSL for website with Nginx and StartSSL/index.html
index 6f4e9d2f3..5c74dc6a7 100644
--- a/docs/2015/09/26/Set up SSL for website with Nginx and StartSSL/index.html
+++ b/docs/2015/09/26/Set up SSL for website with Nginx and StartSSL/index.html
@@ -1,13 +1,18 @@
-Set up SSL for website with Nginx and StartSSL | Beendless ~ 快节奏,慢生活,无止境
+Set up SSL for website with Nginx and StartSSL | Beendless ~ 快节奏,慢生活,无止境
-
If you enable HTTPS and set up the certificates correctly, data cannot be read or modified in transit. Today I tried to enable SSL for my website. Here is what I did to make it happen:
First, make sure your website is hosted on a dedicated IP address; in my case I bought a VPS from Linode. Also make sure your HTTP web server was built with SSL support. If you are using nginx, just add --with-http_ssl_module when you build it yourself (http://nginx.org/en/docs/http/ngx_http_ssl_module.html).
Secondly, you need to obtain a certificate. All modern browsers check the certificate chain, so to be recognized you need a certificate issued through a trusted root authority. You can self-sign one for testing, but browsers will show a warning to users. Fortunately, there are authorities such as StartSSL that issue free certificates. It's easy to get one: just sign up and follow the guidance from StartSSL, and then you can get your certificate.
location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;
    }
}
After that, restart nginx and revisit your website; it should now support HTTPS. Make sure port 443 is open in your firewall configuration. If you want to redirect all HTTP requests to HTTPS, just add the configuration below:
/**
 * Initializes random centroids, using the ranges of the data
 * to set minimum and maximum bounds for the centroids.
 * You may inspect the output of this method if you need to debug
 * random initialization, otherwise this is an internal method.
 * @see getAllDimensionRanges
 * @see getRangeForDimension
 * @returns {Array}
 */
initRandomCentroids() {
    const dimensionality = this.getDimensionality();
    const dimensionRanges = this.getAllDimensionRanges();
    const centroids = [];

    // We must create 'k' centroids.
    for (let i = 0; i < this.k; i++) {
        // Since each dimension has its own range, create a placeholder at first
        let point = [];
        /**
         * For each dimension in the data find the min/max range of that dimension,
         * and choose a random value that lies within that range.
         */
        for (let dimension = 0; dimension < dimensionality; dimension++) {
            const {min, max} = dimensionRanges[dimension];
            point[dimension] = min + (Math.random() * (max - min));
        }
        centroids.push(point);
    }
    return centroids;
}
It is based on the logical separation of concerns of your application (or platform) into layers, and the layers must comply with the following points:
Each layer must have a well-defined purpose (presentation layer, business layer, and so on)
@@ -61,4 +66,4 @@
Lambda Architectures
Lambda architectures are a special pattern designed to provide a high-throughput platform that is able to process very large quantities of data both in real time and in batches.
This solution has a very high maintenance cost associated with it, since you are basically maintaining two parallel architectures at once, which in turn need to keep a centralized repository of data in a synchronized manner.
\ No newline at end of file
diff --git a/docs/2018/12/20/Scaling-NodeJS-Apps/index.html b/docs/2018/12/20/Scaling-NodeJS-Apps/index.html
index 82b29b44d..3fccc552f 100644
--- a/docs/2018/12/20/Scaling-NodeJS-Apps/index.html
+++ b/docs/2018/12/20/Scaling-NodeJS-Apps/index.html
@@ -1,13 +1,18 @@
-Scaling NodeJS Apps -- The Need to Scale | Beendless ~ 快节奏,慢生活,无止境
+Scaling NodeJS Apps -- The Need to Scale | Beendless ~ 快节奏,慢生活,无止境
-
An increase in incoming traffic could affect your system in different ways; we can describe these effects as direct or indirect.
Direct Effects
@@ -36,10 +41,8 @@
Redundancy
We have one or more components performing the same task, plus some form of checking logic to determine when one of them has failed and its output needs to be ignored. It's a very common practice for mission-critical components.
+
Triple Modular Redundancy TMR is a form of redundancy in which three systems perform the same process and their results are checked by a majority voting system that in turn produces a single output.
+
Forward Error Correction FEC adds redundancy into the message itself. The receiver can verify the actual data and correct a limited number of detected errors caused by noisy or unstable channels.
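As a toy illustration of the majority-voting idea behind TMR (not taken from the original post; the function signature is made up for the example), here is a small Go sketch that runs three redundant replicas of the same computation and keeps whichever output has a majority:

// vote runs the same computation on three redundant replicas and returns the
// majority result; if all three replicas disagree, it reports a failure.
func vote(replicas [3]func(input int) int, input int) (int, bool) {
    a, b, c := replicas[0](input), replicas[1](input), replicas[2](input)
    switch {
    case a == b || a == c:
        return a, true
    case b == c:
        return b, true
    default:
        return 0, false // no majority: every replica produced a different output
    }
}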
\ No newline at end of file
diff --git a/docs/2018/12/23/Started-to-go-through-convnetjs/index.html b/docs/2018/12/23/Started-to-go-through-convnetjs/index.html
index cf65b96a6..d83f930d0 100644
--- a/docs/2018/12/23/Started-to-go-through-convnetjs/index.html
+++ b/docs/2018/12/23/Started-to-go-through-convnetjs/index.html
@@ -1,12 +1,17 @@
-Started to go through convnetjs | Beendless ~ 快节奏,慢生活,无止境
+Started to go through convnetjs | Beendless ~ 快节奏,慢生活,无止境
-
After reading several books about deep learning, I can now use Keras / TensorFlow to train some models, but I still have to follow the mathematical implementations behind the libraries.
Two years ago, when I played with Karpathy's ConvnetJS, I was amazed by the algorithms behind it. I think now it's time to go through his code, so I started a small project to annotate the ConvnetJS source code and rewrite it in TypeScript. You can check my progress in ConvnetJS Source Annotation; you can also check the documentation directly.
\ No newline at end of file
diff --git a/docs/2019/01/01/Several-Important-Concetps-of-CNN/index.html b/docs/2019/01/01/Several-Important-Concetps-of-CNN/index.html
index 1698ee573..d090c60a5 100644
--- a/docs/2019/01/01/Several-Important-Concetps-of-CNN/index.html
+++ b/docs/2019/01/01/Several-Important-Concetps-of-CNN/index.html
@@ -1,13 +1,18 @@
-Several Important Concepts of CNN | Beendless ~ 快节奏,慢生活,无止境
+Several Important Concepts of CNN | Beendless ~ 快节奏,慢生活,无止境
-
A trained convolutional layer is made up of many feature detectors, called filters, which slide over an input image tensor as a moving window. This is a very powerful technique, and it possesses several advantages over the flatten-and-classify approach of plain deep networks.
Below are some notes coming from Deep Learning Quick Reference.
Convolutional Layer
During the computation between the input and each filter, we take the elementwise product across all axes, so in the end we are still left with a two-dimensional output.
In a convolution layer, each unit is a filter, combined with a nonlinearity.
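To make the sliding-window computation concrete, here is a minimal Go sketch (not from the original post) of a "valid" 2D convolution of a single-channel input with a single filter: at each window position it takes the element-wise product and sums it, producing a two-dimensional feature map.

// conv2DValid slides filter over input (no padding, stride 1) and returns the
// 2D feature map of element-wise products summed under each window position.
func conv2DValid(input, filter [][]float64) [][]float64 {
    outRows := len(input) - len(filter) + 1
    outCols := len(input[0]) - len(filter[0]) + 1
    out := make([][]float64, outRows)
    for r := 0; r < outRows; r++ {
        out[r] = make([]float64, outCols)
        for c := 0; c < outCols; c++ {
            sum := 0.0
            for i := 0; i < len(filter); i++ {
                for j := 0; j < len(filter[0]); j++ {
                    sum += input[r+i][c+j] * filter[i][j]
                }
            }
            out[r][c] = sum
        }
    }
    return out
}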
One of the biggest headaches of using deep neural networks is that they have tons of hyperparameters that should be optimized so that the network performs optimally. Below are some notes coming from Deep Learning Quick Reference.
Try to find a similar solved problem.
Keep adding layers/nodes until the network starts overfitting.
-
The bad thing becomes a good thing: it helps us confirm that the network can fit the training set perfectly, at least.
Hyperparameters
Optimizer
@@ -44,4 +47,4 @@
Ultimately, hyperparameter search is an economics problem: the first part of any hyperparameter search should be a consideration of your budget of computation time, and personal time, in attempting to isolate the best hyperparameter configuration.
Python introduced async/await syntax in Python 3.5. It makes your code non-blocking and speedy, and developers can use it to build high-performance, non-blocking I/O web services, much like NodeJS. Most Python web developers are familiar with Flask, but unfortunately Flask has no plan to support async request handlers. Sanic is a Flask-like web server that's written to go fast; it was inspired by uvloop.
I set up a sanic boilerplate to show how to set up a sanic application. Inside of this project:
Dockerfile and docker-compose.yml are used to set up the python environments
@@ -15,4 +20,4 @@
Blueprint is used to build different parts of the applications (health-check / docs / business logic samples)
\ No newline at end of file
diff --git a/docs/2020/08/31/Loading Image to Google Colab Notebooks/index.html b/docs/2020/08/31/Loading Image to Google Colab Notebooks/index.html
index d14296de3..82e26ae93 100644
--- a/docs/2020/08/31/Loading Image to Google Colab Notebooks/index.html
+++ b/docs/2020/08/31/Loading Image to Google Colab Notebooks/index.html
@@ -1,15 +1,20 @@
-Loading Image to Google Colab Notebooks | Beendless ~ 快节奏,慢生活,无止境
+Loading Image to Google Colab Notebooks | Beendless ~ 快节奏,慢生活,无止境
-
Google Colab is one of the best places to start your machine learning journey. Sometimes you may want to upload images to the notebooks from your local machine. Fortunately, you can easily get it done through the built-in API.
from google.colab import files
from io import BytesIO
import matplotlib.pyplot as plt

uploaded_files = files.upload()

images = {fname: plt.imread(BytesIO(fbinary)) for fname, fbinary in uploaded_files.items()}
If you are using Keras, you can also read the uploaded file and convert it to a Numpy array with built-in helper functions.
from tensorflow.keras.preprocessing.image import load_img, img_to_array
TARGET_SIZE = 256
images = {fname: img_to_array(load_img(fname, target_size=(TARGET_SIZE, TARGET_SIZE))) for fname in uploaded_files.keys()}
I started reading the book Deep Learning with TensorFlow 2 and Keras these days, and will keep posting what I learn from it here.
There are several standard computer vision tasks that everyone meets when learning CNNs, such as MNIST, ImageNet, etc. But when you start applying what you learned to real projects, you may find more complex categories of computer vision use cases.
Classification and localization
@@ -47,13 +52,13 @@
Concatenative TTS is where single speech voice fragments are first memorized and then recombined when the voice has to be reproduced. However, this approach does not scale because it is possible to reproduce only the memorized voice fragments, and it is not possible to reproduce new speakers or different types of audio without memorizing the fragments from the beginning.
Parametric TTS is where a model is created for storing all the characteristic features of the audio to be synthesized. Before WaveNet, the audio generated with parametric TTS was less natural than concatenative TTS. WaveNet enabled significant improvement by modeling directly the production of audio sounds, instead of using intermediate signal processing algorithms as in the past.
\ No newline at end of file
diff --git a/docs/2020/10/02/UsePureJSToAddWaterMarkForYourSite/index.html b/docs/2020/10/02/UsePureJSToAddWaterMarkForYourSite/index.html
index 62dd85d8e..80549d1b5 100644
--- a/docs/2020/10/02/UsePureJSToAddWaterMarkForYourSite/index.html
+++ b/docs/2020/10/02/UsePureJSToAddWaterMarkForYourSite/index.html
@@ -1,12 +1,17 @@
-Add WaterMark with JavaScript to Your Website | Beendless ~ 快节奏,慢生活,无止境
+Add WaterMark with JavaScript to Your Website | Beendless ~ 快节奏,慢生活,无止境
-
If you are an enterprise application developer, you may want to add a watermark to your application. You can apply the JavaScript below to applications like Confluence, Jira, and so on; you just need to paste in the JS code.
\ No newline at end of file
diff --git a/docs/2020/10/09/Serving Files on S3 through NodeJS/index.html b/docs/2020/10/09/Serving Files on S3 through NodeJS/index.html
index 5acb4f9bc..d2a969863 100644
--- a/docs/2020/10/09/Serving Files on S3 through NodeJS/index.html
+++ b/docs/2020/10/09/Serving Files on S3 through NodeJS/index.html
@@ -1,12 +1,17 @@
-Serving Files on S3 through NodeJS | Beendless ~ 快节奏,慢生活,无止境
+Serving Files on S3 through NodeJS | Beendless ~ 快节奏,慢生活,无止境
-
NodeJS stream is one of the most powerful built-in modules. If you need to serve files on S3 through a NodeJS service, a good idea is to leverage streams, especially if you want to serve big files.
One small trick here is that you need to set the correct Content-Type before sending the response back to the browser. Based on AWS's documentation, https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html, we can listen to the httpHeaders event and set the correct response header information.
When you follow a functional programming style to write JavaScript, you may find it hard to deal with asynchrony, since async functions always return promises. So code like the snippet below will fire all the promises at the same time instead of waiting for them.
(() => {
  [1, 2, 3, 4].forEach(async (x) => {
    await AsyncFunction(x);
  });
  console.log('Done'); // Run before the promises are resolved.
})();
There are several different approaches to solve this problem.
@@ -17,4 +22,4 @@
Use Reduce to loop over async calls [promise chain]
Since async calls return promises, we can emulate forEach with reduce by starting with a resolved promise and chaining onto it the promise for each value in the array.
(async () => {
  await [1, 2, 3, 4].reduce(
    (acc, item) => acc.then(() => AsyncFunction(item)),
    Promise.resolve()
  );
  console.log('Done'); // Run after all promises are resolved.
})();
HTTP requests and HTTP responses use header fields to send information about the HTTP messages. Header fields are colon-separated name-value pairs that are separated by a carriage return (CR) and a line feed (LF). A standard set of HTTP header fields is defined in RFC 2616. There are also non-standard HTTP headers that are widely used by applications; some of them have an X-Forwarded prefix.
The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your servers. For example, suppose you host your web application behind a proxy server, say an AWS load balancer. If that load balancer is the only layer in front of your application, then the X-Forwarded-Proto value will be either http or https, depending on how the client connected to the load balancer.
Usually it won't be an issue. But if you have multiple proxy servers in front of your application, for instance the user has to go through a CDN, a WAF, and a load balancer to hit your application, then the value of X-Forwarded-Proto depends on how the last two layers connect to each other instead of on the protocol the client used. This means that if the user opens the website over HTTPS, you may have trouble setting the secure cookie in your HTTP response.
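The post's own example below uses ExpressJS, but as a language-neutral illustration of the same idea, here is a small Go net/http sketch (handler and cookie names are made up for the example) that inspects X-Forwarded-Proto before marking a cookie as Secure:

func setSessionCookie(w http.ResponseWriter, r *http.Request) {
    // Behind one or more proxies, r.TLS is nil even for HTTPS requests,
    // so we have to trust the X-Forwarded-Proto header set by the proxy chain.
    viaHTTPS := r.Header.Get("X-Forwarded-Proto") == "https"
    http.SetCookie(w, &http.Cookie{
        Name:     "session_id",      // hypothetical cookie name
        Value:    "example-session", // hypothetical value
        HttpOnly: true,
        Secure:   viaHTTPS, // only mark Secure when the client really used HTTPS
    })
}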
Here is an example: you set up an application with ExpressJS and serve it over HTTPS. In front of your application, you have Fastly, Imperva, and an AWS ALB. Now suppose you are using express-session to set up your user session with the configuration below:
\ No newline at end of file
diff --git a/docs/2021/07/06/Understand Golang's Function Type/index.html b/docs/2021/07/06/Understand Golang's Function Type/index.html
index a9b8ab83c..c2d5327b6 100644
--- a/docs/2021/07/06/Understand Golang's Function Type/index.html
+++ b/docs/2021/07/06/Understand Golang's Function Type/index.html
@@ -1,13 +1,18 @@
-Understand Golang's Function Type | Beendless ~ 快节奏,慢生活,无止境
+Understand Golang's Function Type | Beendless ~ 快节奏,慢生活,无止境
-
Two function types are identical if they have the same number of parameters and result types, corresponding parameter and result types are identical, and either both functions are variadic or neither is. Parameter and result names are not required to match.
@@ -21,4 +26,4 @@
This means we can define a handler like the one below:
func handleGreeting(format string) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, format, "World")
    }
}
This is a valid return value since the anonymous function's signature matches http.HandlerFunc, so we don't need to explicitly convert it. It's the same as:
-
func handleGreeting(format string) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, format, "World")
    })
}
Let's take a look at an easy problem on Leetcode: 704. Binary Search. Besides the brute-force O(n) solution, it's not hard to see the O(log n) solution from the constraints unique and sorted in ascending order. Binary search is one of the most basic algorithms we use, yet many people can't get the code exactly right.
Based on open/closed intervals, there are two different templates for binary search code:
Another important thing to keep in mind is integer overflow on the range. You may notice that when we calculate the new sub-ranges above, we use middle := left + (right-left)/2 instead of middle := (left + right)/2. What's the difference between the two? Mathematically there is none, but in the computer world the latter can potentially cause an overflow issue when the array is too large: left + right could exceed the largest int.
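For reference, here is a minimal Go sketch of the closed-interval template, using the overflow-safe midpoint calculation discussed above (the function name search is just illustrative):

// search returns the index of target in the sorted slice nums, or -1 if absent.
func search(nums []int, target int) int {
    left, right := 0, len(nums)-1 // closed interval [left, right]
    for left <= right {
        middle := left + (right-left)/2 // avoids overflow of left+right
        switch {
        case nums[middle] == target:
            return middle
        case nums[middle] < target:
            left = middle + 1
        default:
            right = middle - 1
        }
    }
    return -1
}

With the closed-interval invariant, the loop condition is left <= right and both boundaries always move past middle, so the loop is guaranteed to terminate.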
\ No newline at end of file
diff --git a/docs/2021/08/24/Remove-Linked-List-Elements/index.html b/docs/2021/08/24/Remove-Linked-List-Elements/index.html
index a74c792c3..6a24681d3 100644
--- a/docs/2021/08/24/Remove-Linked-List-Elements/index.html
+++ b/docs/2021/08/24/Remove-Linked-List-Elements/index.html
@@ -1,13 +1,18 @@
-Remove Linked List Elements | Beendless ~ 快节奏,慢生活,无止境
+Remove Linked List Elements | Beendless ~ 快节奏,慢生活,无止境
-
Let's take a look at an easy problem on Leetcode: 203. Remove Linked List Elements. We will demonstrate how to remove elements from a linked list.
Basically there are two ways to get it done.
a. Since every node in the linked list except the head has a previous node, we can use a node's previous node to delete it. But if we have to remove the head node, we need special logic.
func removeElements(head *ListNode, val int) *ListNode {
    // Strip matching nodes from the head first.
    for head != nil && head.Val == val {
        head = head.Next
    }
    for p := head; p != nil && p.Next != nil; {
        if p.Next.Val == val {
            p.Next = p.Next.Next
        } else {
            p = p.Next
        }
    }
    return head
}
@@ -17,4 +22,4 @@
func removeElements(head *ListNode, val int) *ListNode {
    virtualNode := &ListNode{0, head}
    for p := virtualNode; p != nil && p.Next != nil; {
        if p.Next.Val == val {
            p.Next = p.Next.Next
        } else {
            p = p.Next
        }
    }
    return virtualNode.Next
}
\ No newline at end of file
diff --git a/docs/2021/08/30/Design-A-Linked-List-Class/index.html b/docs/2021/08/30/Design-A-Linked-List-Class/index.html
index 0e8d3c7b2..a5ca83f51 100644
--- a/docs/2021/08/30/Design-A-Linked-List-Class/index.html
+++ b/docs/2021/08/30/Design-A-Linked-List-Class/index.html
@@ -1,11 +1,16 @@
-Design A Linked List Class | Beendless ~ 快节奏,慢生活,无止境
+Design A Linked List Class | Beendless ~ 快节奏,慢生活,无止境
-
A linked list class has three common groups of methods: GetByIndex, AddTo (Head/Tail/AtIndex), and Delete, similar to SQL's CRUD. Let's see how to design a linked list class: 707. Design Linked List
func(this *MyLinkedList)Get(index int)int { if index < 0 || index >= this.size { return-1 } p := this.virtualNode for ; index > 0; index-- { p = p.Next } return p.Next.Val }
func(this *MyLinkedList)AddAtTail(val int) { p := this.virtualNode for ; p.Next != nil; p = p.Next {} p.Next = &Node{val, nil} this.size++ }
func(this *MyLinkedList)AddAtIndex(index int, val int) { if index > this.size { return } if index < 0 { index = 0 } p := this.virtualNode for ; index > 0; index-- { p = p.Next } node := &Node{val, p.Next} p.Next = node this.size++ }
func(this *MyLinkedList)DeleteAtIndex(index int) { if index < 0 || index >= this.size { return } p := this.virtualNode for ; index > 0; index-- { p = p.Next } p.Next = p.Next.Next this.size-- }
Usually, when you get a problem about searching for common items between multiple strings, the brute-force solution's time complexity is too high. We can use a hashmap to lower the time complexity.
func isAnagram(s string, t string) bool {
    if len(s) != len(t) { // edge case quick solution
        return false
    }
    cache := make(map[byte]int)
    // Note: in Golang, when using range to iterate a string with a value variable, you get runes instead of bytes, so we index with s[i]
    for i := range s {
        if _, ok := cache[s[i]]; ok {
            cache[s[i]]++
        } else {
            cache[s[i]] = 1
        }
    }
    for i := range t {
        if _, ok := cache[t[i]]; ok {
            cache[t[i]]--
            if cache[t[i]] < 0 {
                return false
            }
        } else {
            return false
        }
    }
    return true
}
@@ -41,4 +46,4 @@
func fourSumCount(nums1 []int, nums2 []int, nums3 []int, nums4 []int) int {
    result := 0
    // Since we can count duplicated results, the map value needs to be an integer counter
    cache := make(map[int]int)
    for _, i := range nums1 {
        for _, j := range nums2 {
            cache[i+j]++
        }
    }
    for _, i := range nums3 {
        for _, j := range nums4 {
            result += cache[-i-j]
        }
    }
    return result
}
Since we need to detect if it loops endlessly in a circle, it’s better to use a hashmap (set).
-
func isHappy(n int) bool {
    cache := make(map[int]bool)
    isHappyNumber := func(n int) int {
        s := 0
        for n > 0 {
            t := n % 10
            s += t * t // sum of the squares of the digits
            n = n / 10
        }
        return s
    }
    // Simulate a set by flagging each number calculated before, to detect the circle
    for n != 1 && !cache[n] {
        n, cache[n] = isHappyNumber(n), true
    }
    return n == 1
}
Let's take a look at an easy problem on Leetcode: 27. Remove Element. We will demonstrate how to remove an element from an array without allocating extra space for another array.
@@ -17,7 +22,7 @@
Squares of a Sorted Array
-
Here is another problem on Leetcode: 977. Squares of a Sorted Array. The straightforward solution is to compute the squares of the given array in an O(n) loop and then sort the result in O(n log n), so the total complexity is O(n + n log n). Let's look at the sorted input again: among the squares, the maximum squared number can only sit at the left end or the right end. This means that if we start two pointers at both ends, keep comparing the squared values, and move the pointers inward, the pointers will meet at the minimum squared number. So the time complexity is O(n).
func sortedSquares(nums []int) []int {
    length := len(nums)
    ret := make([]int, length, length)
    for i, j, k := 0, length-1, length-1; k >= 0; k-- {
        squaredI := nums[i] * nums[i]
        squaredJ := nums[j] * nums[j]
        if squaredJ > squaredI {
            ret[k] = squaredJ
            j--
        } else {
            ret[k] = squaredI
            i++
        }
    }
    return ret
}
\ No newline at end of file
diff --git a/docs/2021/09/09/String-Match-with-KMP-Algorithm/index.html b/docs/2021/09/09/String-Match-with-KMP-Algorithm/index.html
index 119b68777..59ad05f79 100644
--- a/docs/2021/09/09/String-Match-with-KMP-Algorithm/index.html
+++ b/docs/2021/09/09/String-Match-with-KMP-Algorithm/index.html
@@ -1,13 +1,18 @@
-String Match with KMP Algorithm | Beendless ~ 快节奏,慢生活,无止境
+String Match with KMP Algorithm | Beendless ~ 快节奏,慢生活,无止境
-
Searching whether a given string pattern (needle) is part of a target string (haystack) is a common problem. The naive approach is to use two nested loops with O(n * m) time complexity. KMP is a better way, with better performance.
Two key points to implement the KMP algorithm:
a. Generate the LPS (longest proper prefix which is also a suffix) table. b. Use the LPS table to identify a better pointer position for the next match instead of stepping back.
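Since the post states the two key points but does not include code, here is a minimal Go sketch of the idea (function names are illustrative): it first builds the LPS table, then scans the haystack without ever moving the haystack pointer backwards.

// buildLPS returns, for every prefix of the pattern, the length of the
// longest proper prefix that is also a suffix.
func buildLPS(pattern string) []int {
    lps := make([]int, len(pattern))
    k := 0 // length of the currently matched prefix
    for i := 1; i < len(pattern); i++ {
        for k > 0 && pattern[i] != pattern[k] {
            k = lps[k-1] // fall back using the table instead of restarting
        }
        if pattern[i] == pattern[k] {
            k++
        }
        lps[i] = k
    }
    return lps
}

// strStr returns the index of the first occurrence of needle in haystack, or -1.
func strStr(haystack string, needle string) int {
    if len(needle) == 0 {
        return 0
    }
    lps := buildLPS(needle)
    k := 0 // how much of needle is currently matched
    for i := 0; i < len(haystack); i++ {
        for k > 0 && haystack[i] != needle[k] {
            k = lps[k-1]
        }
        if haystack[i] == needle[k] {
            k++
        }
        if k == len(needle) {
            return i - len(needle) + 1
        }
    }
    return -1
}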
\ No newline at end of file
diff --git a/docs/2021/09/11/Design-a-Queue-with-Stack/index.html b/docs/2021/09/11/Design-a-Queue-with-Stack/index.html
index 7f7006770..82ed2dd0a 100644
--- a/docs/2021/09/11/Design-a-Queue-with-Stack/index.html
+++ b/docs/2021/09/11/Design-a-Queue-with-Stack/index.html
@@ -1,16 +1,21 @@
-Design a Queue with Stack | Beendless ~ 快节奏,慢生活,无止境
+Design a Queue with Stack | Beendless ~ 快节奏,慢生活,无止境
-
Since a Queue is FIFO but a Stack is FILO, if we need to use stacks to implement a queue, we need at least two stacks: one that only handles Push operations and another that only handles Pop/Peek operations. When Pop/Peek gets called and the pop-side stack is empty, we move every element from the push-only stack into it; moving them reverses the FILO order, so we end up with a FIFO sequence.
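A minimal Go sketch of this two-stack queue (slice-backed stacks; type and method names are illustrative):

type MyQueue struct {
    in  []int // receives every Push
    out []int // serves Pop/Peek in reversed (FIFO) order
}

func (q *MyQueue) Push(x int) {
    q.in = append(q.in, x)
}

// shift moves everything from the push stack to the pop stack,
// but only when the pop stack is empty, so the FIFO order is preserved.
func (q *MyQueue) shift() {
    if len(q.out) == 0 {
        for len(q.in) > 0 {
            top := q.in[len(q.in)-1]
            q.in = q.in[:len(q.in)-1]
            q.out = append(q.out, top)
        }
    }
}

func (q *MyQueue) Pop() int {
    q.shift()
    v := q.out[len(q.out)-1]
    q.out = q.out[:len(q.out)-1]
    return v
}

func (q *MyQueue) Peek() int {
    q.shift()
    return q.out[len(q.out)-1]
}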
\ No newline at end of file
diff --git a/docs/2021/09/12/Design-a-Stack-with-Queue/index.html b/docs/2021/09/12/Design-a-Stack-with-Queue/index.html
index be57ee37c..621f18ca6 100644
--- a/docs/2021/09/12/Design-a-Stack-with-Queue/index.html
+++ b/docs/2021/09/12/Design-a-Stack-with-Queue/index.html
@@ -1,12 +1,17 @@
-Design a Stack with Queue | Beendless ~ 快节奏,慢生活,无止境
+Design a Stack with Queue | Beendless ~ 快节奏,慢生活,无止境
-
We can't use a solution similar to the one in Design a Queue with Stack, because unlike stacks, moving elements from one queue to another does not change their order. Instead, every time we add a new element we pop out all the elements previously added to the queue and append them again behind it, so the newest element sits at the front; in this way we can simulate a stack.
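A minimal Go sketch of this single-queue stack (slice used as the queue; names are illustrative): Push enqueues the new element and then rotates all older elements behind it.

type MyStack struct {
    queue []int // slice used as a FIFO queue
}

// Push enqueues x and then rotates all older elements behind it,
// so the newest element always sits at the front of the queue.
func (s *MyStack) Push(x int) {
    s.queue = append(s.queue, x)
    for i := 0; i < len(s.queue)-1; i++ {
        s.queue = append(s.queue, s.queue[0])
        s.queue = s.queue[1:]
    }
}

func (s *MyStack) Pop() int {
    v := s.queue[0]
    s.queue = s.queue[1:]
    return v
}

func (s *MyStack) Top() int {
    return s.queue[0]
}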
We can simply iterate over all characters of the given string and, with the help of a stack, compare each character with the most recently pushed one, popping on a match and pushing otherwise.
funcisValid(s string)bool { bs := make([]byte, 0) bs = append(bs, s[0]) dict := map[byte]byte{ ']': '[', ')': '(', '}': '{', } for i := 1; i < len(s); i++ { if v, ok := dict[s[i]]; ok && len(bs) > 0 && v == bs[len(bs) - 1] { bs = bs[:len(bs) - 1] } else { bs = append(bs, s[i]) } } returnlen(bs) == 0 }
A Heap is a special Tree-based data structure in which the tree is a complete binary tree. Generally, there are two types of Heap: Max-Heap (root node is greater than its child nodes) and Min-Heap (root node is smaller than its child nodes).
Golang's standard library ships with a heap container (container/heap). We can also use a slice to simulate a heap. Let's take a max-heap as an example, as sketched below.
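Here is a minimal sketch of a slice-backed max-heap in Go (only Push and Pop, int values, names illustrative); Push sifts the new value up, Pop swaps the last element into the root and sifts it down. Pop assumes the heap is non-empty.

type MaxHeap struct {
    data []int
}

// Push appends the value and sifts it up until its parent is no smaller.
func (h *MaxHeap) Push(v int) {
    h.data = append(h.data, v)
    i := len(h.data) - 1
    for i > 0 {
        parent := (i - 1) / 2
        if h.data[parent] >= h.data[i] {
            break
        }
        h.data[parent], h.data[i] = h.data[i], h.data[parent]
        i = parent
    }
}

// Pop removes and returns the maximum (the root), then sifts the
// relocated last element down to restore the heap property.
func (h *MaxHeap) Pop() int {
    top := h.data[0]
    last := len(h.data) - 1
    h.data[0] = h.data[last]
    h.data = h.data[:last]
    i := 0
    for {
        left, right, largest := 2*i+1, 2*i+2, i
        if left < len(h.data) && h.data[left] > h.data[largest] {
            largest = left
        }
        if right < len(h.data) && h.data[right] > h.data[largest] {
            largest = right
        }
        if largest == i {
            break
        }
        h.data[i], h.data[largest] = h.data[largest], h.data[i]
        i = largest
    }
    return top
}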
funcinorderTraversal(root *TreeNode) []int { result := []int{} if root != nil { if root.Left != nil { result = append(result, inorderTraversal(root.Left)...) } result = append(result, root.Val) if root.Right != nil { result = append(result, inorderTraversal(root.Right)...) } } return result }
funcgetDepth(node *TreeNode)int { if node != nil { left := getDepth(node.Left) right := getDepth(node.Right) if left > right { return left + 1 } return right + 1 } return0 }
Based on the problem description, we need to find all paths in all subtrees of the given tree whose sum equals targetSum. There is a pitfall as an edge case: once we have found one path (meaning the remaining target reaches 0), we still need to continue the search, because the rest of the path below may add up to 0 again.
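A minimal Go sketch of this idea (counting downward paths that sum to targetSum, deliberately continuing the search after the remaining target hits 0; it assumes the usual *TreeNode definition used in the snippets above):

func pathSum(root *TreeNode, targetSum int) int {
    if root == nil {
        return 0
    }
    // countFrom counts paths that start at node and only go downwards.
    var countFrom func(node *TreeNode, remaining int) int
    countFrom = func(node *TreeNode, remaining int) int {
        if node == nil {
            return 0
        }
        count := 0
        remaining -= node.Val
        if remaining == 0 {
            count++ // found a path, but keep searching: values below may sum to 0
        }
        count += countFrom(node.Left, remaining)
        count += countFrom(node.Right, remaining)
        return count
    }
    // Every node may be the start of a path, so recurse on both children as well.
    return countFrom(root, targetSum) + pathSum(root.Left, targetSum) + pathSum(root.Right, targetSum)
}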
\ No newline at end of file
diff --git a/docs/2021/09/23/Construct-and-Update-a-Tree/index.html b/docs/2021/09/23/Construct-and-Update-a-Tree/index.html
index 03b59442f..4133b8893 100644
--- a/docs/2021/09/23/Construct-and-Update-a-Tree/index.html
+++ b/docs/2021/09/23/Construct-and-Update-a-Tree/index.html
@@ -1,13 +1,18 @@
-Construct and Update a Tree | Beendless ~ 快节奏,慢生活,无止境
+Construct and Update a Tree | Beendless ~ 快节奏,慢生活,无止境
-
Based on the preorder traversal definition of a BST, the first element in the slice always comes from the root node; we can split the remaining elements into two parts for the child subtrees at the first element that is no less than the root value.
funcbstFromPreorder(preorder []int) *TreeNode { var root *TreeNode length := len(preorder) if length > 0 { root = &TreeNode{} root.Val = preorder[0] i := 1 for i < length { if preorder[i] >= root.Val { break } i++ } root.Left = bstFromPreorder(preorder[1:i]) root.Right = bstFromPreorder(preorder[i:]) } return root }
The last element in the postorder slice is the root node. With this information, we can split the inorder slice into a left subtree and a right subtree. Since we then know the number of nodes in the left subtree, we can go back and split the postorder list in two as well.
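A minimal Go sketch of this construction (assuming unique values and the usual *TreeNode definition):

func buildTree(inorder []int, postorder []int) *TreeNode {
    if len(postorder) == 0 {
        return nil
    }
    // The last element of postorder is the root of the current subtree.
    rootVal := postorder[len(postorder)-1]
    root := &TreeNode{Val: rootVal}
    // Find the root in inorder; everything to its left belongs to the left subtree.
    i := 0
    for inorder[i] != rootVal {
        i++
    }
    // The left subtree has i nodes, so the first i elements of postorder belong to it.
    root.Left = buildTree(inorder[:i], postorder[:i])
    root.Right = buildTree(inorder[i+1:], postorder[i:len(postorder)-1])
    return root
}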
With the idea from the problem above, since "b is a copy of a with the value val appended to it", b can only become the new root node or part of the right subtree, based on the tree construction rule.
Since the inorder traversal of a BST yields a sorted slice, for the greater-sum tree we need the reversed order, which means we can still follow the inorder traversal of the tree but visit the right child first.
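A minimal Go sketch of this reverse-inorder accumulation (assuming the usual *TreeNode definition):

func convertBST(root *TreeNode) *TreeNode {
    sum := 0
    // Reverse inorder traversal: right subtree first, so we visit keys
    // from largest to smallest and can accumulate the running sum.
    var traverse func(node *TreeNode)
    traverse = func(node *TreeNode) {
        if node == nil {
            return
        }
        traverse(node.Right)
        sum += node.Val
        node.Val = sum
        traverse(node.Left)
    }
    traverse(root)
    return root
}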
\ No newline at end of file
diff --git a/docs/2021/09/27/Search in Trees/index.html b/docs/2021/09/27/Search in Trees/index.html
index 6ccdf89d0..f1ac73531 100644
--- a/docs/2021/09/27/Search in Trees/index.html
+++ b/docs/2021/09/27/Search in Trees/index.html
@@ -1,13 +1,18 @@
-Search in Trees | Beendless ~ 快节奏,慢生活,无止境
+Search in Trees | Beendless ~ 快节奏,慢生活,无止境
-
funcsearchBST(root *TreeNode, val int) *TreeNode { var ret *TreeNode current := root for current != nil { if current.Val == val { ret = current break } elseif current.Val > val { current = current.Left } else { current = current.Right } } return ret }
funccanPartitionKSubsets(nums []int, k int)bool { sum := 0 for _, num := range nums { sum += num } if sum % k != 0 { returnfalse } sort.Slice(nums, func(a, b int)bool { return a > b }) target := sum / k n := len(nums) if nums[n - 1] > target { returnfalse } for n > 0 && nums[n - 1] == target { n-- k-- } subsets := make([]int, k) var backtracking func(int)bool backtracking = func(index int)bool { if index == n { for _, subset := range subsets { if subset != target { returnfalse } } returntrue } for i := 0; i < k; i++ { if subsets[i] + nums[index] <= target { subsets[i] += nums[index] if backtracking(index + 1) { returntrue } subsets[i] -= nums[index] } } returnfalse } return backtracking(0) }
Another faster backtracking solution is to accumulate the successful partition.
funccanPartitionKSubsets(nums []int, k int)bool { sum := 0 for _, num := range nums { sum += num } if sum % k != 0 { returnfalse } target := sum / k n := len(nums) sort.Slice(nums, func(a, b int)bool { // Sort the slice by desc with a greedy way, so we can quickly get the target number return a > b }) if nums[n - 1] > target { returnfalse } for n > 0 && nums[n - 1] == target { n-- k-- } visited := make([]bool, n) var backtracking func(int, int, int)bool backtracking = func(index, partition, acc int)bool { if partition == k { returntrue } if acc == target { return backtracking(0, partition + 1, 0) } for i := index; i < n; i++ { if !visited[i] { visited[i] = true if backtracking(i + 1, partition, acc + nums[i]) { returntrue } visited[i] = false } } returnfalse } return backtracking(0, 0, 0) }
Backtracking is an algorithmic technique for solving problems recursively by trying to build a solution incrementally, one piece at a time, and removing those solutions that fail to satisfy the constraints of the problem at any point in time (time here refers to the time elapsed until reaching any level of the search tree). Usually we can treat backtracking as a recursive DFS traversal.
Backtracking template
func backtracking(...args) {
    if stop_condition {
        // Update the result set
        return
    }
    for i := range nodes_in_current_layer(...args) {
        // Go down to the next layer
        backtracking(...args, i + 1)
        // Go back to the upper layer
    }
}
Since we can convert a combination backtracking problem to a DFS traversal problem, if we don't want duplicated combinations in the result, we can't pick duplicated nodes from the same layer of the tree. According to the backtracking template, inside the backtracking for-loop we handle the same-layer logic (push/pop). At this point, if the given candidates slice is sorted, we just need to check whether the previous element equals the current element in the same layer.
Backtracking can also help us get all subsets of a given slice. If combination and partitioning problems can be converted to collecting root-to-leaf paths during a tree DFS traversal, subsets can be treated as collecting all root-to-node paths during the traversal.
It's similar to #78; the only difference is that we can't have duplicated subsets, which means we can't pick the same value at the same tree level during traversal, as in the sketch below.
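A minimal Go sketch of this sort-then-skip-duplicates approach (it assumes "sort" is imported, as in the earlier snippets):

func subsetsWithDup(nums []int) [][]int {
    sort.Ints(nums) // sorting puts duplicates next to each other so we can skip them
    result := [][]int{}
    path := []int{}
    var backtracking func(start int)
    backtracking = func(start int) {
        // Every node of the DFS tree (not only the leaves) is a subset.
        result = append(result, append([]int{}, path...))
        for i := start; i < len(nums); i++ {
            // Skip a value we already used at this tree level.
            if i > start && nums[i] == nums[i-1] {
                continue
            }
            path = append(path, nums[i])
            backtracking(i + 1)
            path = path[:len(path)-1]
        }
    }
    backtracking(0)
    return result
}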
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. So problems where a locally optimal choice also leads to a global solution are the best fit for greedy.
If each child we serve is made content in the locally optimal way, the total number of content children is as large as possible, so the local optimum leads to a global optimum and we can use greedy. Since we want to make as many children happy as possible, we give cookies to the children with the lower greed value first.
import "sort"

func findContentChildren(g []int, s []int) int {
    sort.Ints(g)
    sort.Ints(s)
    result := 0
    for i, j := 0, 0; i < len(g) && j < len(s); j++ {
        if g[i] <= s[j] { // the j-th cookie satisfies the i-th child
            i++
            result++
        }
    }
    return result
}
@@ -38,4 +43,4 @@
funcmaxProfit(prices []int)int { result := 0 peak := prices[0] valley := prices[0] length := len(prices) for i := 0; i < length - 1; { for i < length - 1 && prices[i] >= prices[i + 1] { i++ } valley = prices[i] for i < length - 1 && prices[i] <= prices[i + 1] { i++ } peak = prices[i] result += peak - valley } return result }
func maxProfit(prices []int) int {
    length := len(prices)
    dp := make([][2]int, length)
    // dp[i][0]: on day i we are holding stock
    // dp[i][1]: on day i we don't have stock
    dp[0] = [2]int{-prices[0], 0}
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    for i := 1; i < length; i++ {
        // stock we kept from day i - 1, or stock we buy on day i
        dp[i][0] = max(dp[i-1][0], dp[i-1][1]-prices[i])
        // no stock kept from day i - 1, or we sell the stock we held on day i - 1
        dp[i][1] = max(dp[i-1][1], dp[i-1][0]+prices[i])
    }
    return max(dp[length-1][0], dp[length-1][1])
}
At each step, a greedy jump gives us the locally optimal furthest reach. The global solution can be found if we always take the greedy jump.
func canJump(nums []int) bool {
    distance := 0
    length := len(nums)
    for i := 0; i <= distance; i++ {
        // Note: here we use distance to control which items we can check
        if distance < i+nums[i] {
            distance = i + nums[i]
        }
        if distance >= length-1 {
            return true
        }
    }
    return false
}
funccanReach(s string, minJump int, maxJump int)bool { queue := []int{0} length := len(s) visited := make(map[int]bool) visited[0] = true edge := 0 min := func(a, b int)int { if a > b { return b } return a } max := func(a, b int)int { if a > b { return a } return b } forlen(queue) > 0 { index := queue[0] if index == length - 1 { returntrue } queue = queue[1:] left := index + minJump right := min(length - 1, index + maxJump) for i := max(edge + 1, left); i <= right; i++ { if s[i] == '0' && !visited[i]{ visited[i] = true queue = append(queue, i) } } edge = right } returnfalse }
funccanReach(s string, minJump int, maxJump int)bool { length := len(s) if s[length - 1] == '0' { min := func(a, b int)int { if a > b { return b } return a } max := func(a, b int)int { if a > b { return a } return b } canVisit := make(map[int]bool) canVisit[0] = true edge := 0 for i := 0; i <= edge && i < length; i++ { if canVisit[i] { left := i + minJump right := min(length - 1, i + maxJump) for j := max(left, edge + 1); j <= right; j++ { if s[j] == '0' { canVisit[j] = true if j == length - 1 { returntrue } } } edge = right } } } returnfalse }
To get the maximum sum, we want to flip as many negative numbers to positive as possible. If an odd number of flips is still left after that, we just flip the smallest number in the array to negative.
funclargestSumAfterKNegations(nums []int, k int)int { sort.Ints(nums) i := 0 for i < k && i < len(nums) { if nums[i] < 0 { nums[i] = -nums[i] i++ } else { break } } if i < k && (k - i) % 2 == 1 { sort.Ints(nums) nums[0] = -nums[0] } result := 0 for _, num := range nums { result += num } return result }
Several cases: 1) If the total amount of gas is less than the total cost, we can't make a round trip. 2) Given an arbitrary start point i, with gas[i] in the tank at i, start there and accumulate the gas left in the tank. If at point i + k the accumulation becomes negative, it means no point in [i, i + k] can get past i + k, so we can jump the candidate start straight to i + k + 1 instead of i + 1, as in the sketch below.
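A minimal Go sketch of this greedy restart argument (this is the standard gas-station approach, written here as an illustration rather than the post's original code):

func canCompleteCircuit(gas []int, cost []int) int {
    total, tank, start := 0, 0, 0
    for i := 0; i < len(gas); i++ {
        diff := gas[i] - cost[i]
        total += diff
        tank += diff
        // If the tank goes negative, no station in [start, i] can be the answer,
        // so jump the candidate start directly to i + 1.
        if tank < 0 {
            start = i + 1
            tank = 0
        }
    }
    if total < 0 {
        return -1 // case 1: total gas is less than total cost
    }
    return start
}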
@@ -29,4 +34,4 @@
func findMinArrowShots(points [][]int) int {
    sort.Slice(points, func(a, b int) bool {
        return points[a][0] < points[b][0]
    })
    result := 1
    for i := 1; i < len(points); i++ {
        if points[i][0] > points[i-1][1] {
            result++
        } else if points[i][1] > points[i-1][1] {
            // Overlapping balloons share one arrow; shrink to the common right boundary.
            points[i][1] = points[i-1][1]
        }
    }
    return result
}
This is a similar problem to #452. An intuition for this kind of problem is to sort the intervals first. Since all line segments have two endpoints, we have two choices for sorting. The locally optimal way to keep an interval is that the end of the current segment must not pass the start of the next one; with this in mind, we can quickly count the non-overlapping intervals. If we sort by the end point, we can iterate from left to right; otherwise, we need to reverse the iteration order.
-
funceraseOverlapIntervals(intervals [][]int)int { sort.Slice(intervals, func(a, b int)bool { return intervals[a][1] < intervals[b][1] }) end := intervals[0][1] count := 1 for i := 1; i < len(intervals); i++ { if end <= intervals[i][0] { count++ end = intervals[i][1] } } returnlen(intervals) - count }
Dynamic Programming (commonly referred to as DP) is an algorithmic technique for solving a problem by recursively breaking it down into simpler subproblems and using the fact that the optimal solution to the overall problem depends upon the optimal solutions to its individual subproblems. Here is an interesting Quora question: How should I explain dynamic programming to a 4-year-old?.
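As a tiny warm-up example (not one of the problems discussed below), here is the classic climbing-stairs recurrence dp[i] = dp[i-1] + dp[i-2] written bottom-up in Go:

// climbStairs counts the distinct ways to climb n steps taking 1 or 2 at a time.
func climbStairs(n int) int {
    if n <= 2 {
        return n
    }
    dp := make([]int, n+1)
    dp[1], dp[2] = 1, 2
    for i := 3; i <= n; i++ {
        // The answer for i depends only on the two smaller subproblems.
        dp[i] = dp[i-1] + dp[i-2]
    }
    return dp[n]
}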
Based on the BST specs, we can get the state transition function dp[i] = dp[j] * dp[i - j - 1], where dp[i] denotes the number of unique BSTs that can be built from i nodes: picking a root leaves j nodes for the left subtree and i - j - 1 for the right. Note the base case here is 1: if there are 0 nodes in the left subtree, there is exactly one way to construct that (empty) left subtree.
-
func numTrees(n int) int {
    if n < 3 {
        return n
    }
    dp := make([]int, n+1)
    dp[0] = 1
    dp[1] = 1
    dp[2] = 2
    for i := 3; i <= n; i++ {
        for j := 0; j < i; j++ {
            // j nodes in the left subtree, i - j - 1 nodes in the right subtree
            dp[i] += dp[j] * dp[i-j-1]
        }
    }
    return dp[n]
}
Similar to other segment-related problems, the first thing we need to do is sort the slice. Once we have a sorted segment slice, we can iterate over all items and merge them. Note there is one edge case to cover after the iteration: either we merged all segments into one, or the last one could not be merged into the previous segment, and in both cases the final [start, end] still has to be appended.
func merge(intervals [][]int) [][]int {
    sort.Slice(intervals, func(a, b int) bool { return intervals[a][0] < intervals[b][0] })
    result := [][]int{}
    start := intervals[0][0]
    end := intervals[0][1]
    for i := 1; i < len(intervals); i++ {
        if end < intervals[i][0] {
            result = append(result, []int{start, end})
            start = intervals[i][0]
            end = intervals[i][1]
        } else if end < intervals[i][1] {
            end = intervals[i][1]
        }
    }
    result = append(result, []int{start, end})
    return result
}
Mastering Go Notes
If you have to check godoc offline, you can install godoc with go get golang.org/x/tools/cmd/godoc and then run godoc -http :8001 in the terminal.
Go considers the main() function the entry point to the application and begins the execution of the application with the code found in the main() function of the main package.
Everything that begins with a lowercase letter is considered private and is accessible in the current package only.
If no initial value is given to a variable, the Go compiler will automatically initialize that variable to the zero value of its data type.
The var keyword is mostly used for declaring global or local variables without an initial value. Since every statement that exists outside of the code of a function must begin with a keyword such as func, const or var, you can't use the short assignment statement := outside of a function.
The os.Args string slice is properly initialized by Go and is available to the program when referenced.
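For illustration only, a tiny program that prints the contents of os.Args:

package main

import (
    "fmt"
    "os"
)

func main() {
    // os.Args[0] is the program path; the remaining entries are the command-line arguments.
    fmt.Println("program:", os.Args[0])
    for i, arg := range os.Args[1:] {
        fmt.Printf("arg %d: %s\n", i+1, arg)
    }
}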
Multi-Stage Golang Docker Image Build and Kubernetes Deployment
If we have a tiny web service which returns the host name as below, we can use the golang image to build the executable package, then move it into a basic Linux container like alpine.
func handler(w http.ResponseWriter, r *http.Request) {
    name, err := os.Hostname()
    if err != nil {
        fmt.Fprintf(w, "Can't get hostname")
    } else {
        fmt.Fprintf(w, "Go Hostname: %s\n", name)
    }
}
func main() {
    r := mux.NewRouter()
    r.PathPrefix("/").HandlerFunc(handler)
    srv := &http.Server{
        Handler: r,
        Addr:    ":8000",
        // Good practice: enforce timeouts for servers you create!
        WriteTimeout: 15 * time.Second,
        ReadTimeout:  15 * time.Second,
    }
    log.Fatal(srv.ListenAndServe())
}
FROM golang:1.17 AS builder
# https://stackoverflow.com/questions/61515186/when-using-cgo-enabled-is-must-and-what-happens
# https://gist.github.com/blessdyb/ebe59987e4a4632b28c10ec74a1eda0c
ENV CGO_ENABLED=0
WORKDIR /build
COPY . .
RUN go mod download
RUN go build -o app
It might be a dish foreign to you; in fact, it was foreign to me too. Ma-Po tofu originates from a region of China far away from my hometown. The first time I had it was in my first year of college. It and I, both having left home, met in a small restaurant next to our campus. Gladly, I was not alone, and neither was the dish. There were new friends of mine sitting around a table, and Ma-Po tofu took the center, having always had the charm to attract people from all over the country. It may be welcomed by people from all over the world one day.
On a simple plate with a certain depth rested the Ma-Po tofu. The white plate made a perfect extension of the dish, as the tofu was white itself yet the sauce all over it was flame red. The depth of the plate had to be perfect too, as the sauce could be a little glutinous and soupy at the same time; a flat plate too shallow would not match the thickness of Ma-Po tofu. The tofu was cut into small cubes like dice and partly immersed in spicy vegetable oil. Red chili powder and green onion bits decorated the tofu cubes like sprinkles, adding not only rich flavors but also festivity to the dish. Chopped garlic and ginger pieces may be easily ignored at first glance, but they were still easier to detect than the tiny, ground meat. The meat had been crisped in hot oil, prepared to support the main role of tofu. Yes, the meat had to surrender and obey the spiciness and fragrance of the rest of the ingredients. The only purpose of all the ingredients was to give a wondrous taste to tofu, which is usually considered the least tasty bean-based curd. While the original flavor of the tofu was retained, a variety of spices that marked the Sichuan region, from which Ma-Po tofu originated, wrapped each piece of tofu. Aha! The wondrous taste owed the most to a special spice, Chinese pepper, the kind that numbs my tongue! It was ground and dredged over the finished dish at last. The brown powder camouflaged itself until my first bite. Anyone can recognize its distinctiveness.
The most recommended way to eat Ma-Po tofu is to dip the tofu in the sauce a bit, or to just use a spoon to take one piece of tofu and the sauce together. The essence of the taste was an incredible integration of different types of spiciness at different layers. The numbing sense gave it a special magic that leveled up that mixed taste. Several small pieces of meat occasionally clung to the tofu or garlic. Although very small, the meat was crispy outside and tender inside, adding a playful element to the other ingredients. Besides the dynamic flavors, the tofu was still being itself: tender, smooth, pleasantly chewy, soft yet firm, just as it was before it was cooked. The texture of the tofu in this dish was something in between marshmallows and cheese. The tenderness and firmness remained, yet another level of slipperiness was built upon them, because now the tofu was made hot, not only hot as in spicy but also hot in temperature. At least it was warm enough that I wanted to take my time before taking anything from it. While the rich taste filled every corner of my mouth at once, the first bite I had just taken would smoothly slide down to reach my throat and esophagus. Usually, eaters could not resist the fragrant warmth as soon as the dish was served, so they jumped into it. Just like taking an adventure in a wonderland of spices, you might want to do it slowly before suddenly getting your tongue, throat, and esophagus numbed or burned. Don't ask me how I know it!
I have remembered my lesson ever since, though I am not always successful in applying it; my husband knows that, because I sometimes make it at home now. Ma-Po tofu is a test even for masters, as making tofu colorful and tasty is an inherently difficult task. But you do not have to make it as stylish as a chef's signature; it is a treat in itself. As a classic in the region where it originates, Ma-Po tofu is an easy, simple, low-key dish that everyone can make at home. It is popular but also homey. Its warmth and aroma remind eaters of everything about home. This might have been the reason that we ordered a Ma-Po tofu at the dinner table when a bunch of first-year college students became friends and shared meals together.
Set up Kubeadm on MacOS with Vagrant and VirtualBox
Based on the kubeadm installation instructions, we can't directly install it on MacOS. But with the help of Vagrant and VirtualBox, we can quickly create a local Kubernetes cluster.
Create three VirtualBox instances, one as a master node and the other two as worker nodes. You can use this Vagrantfile. Basically we will:
a. Use ubuntu/bionic64 as the OS
b. Set up a private network and use the IP subnet “192.168.5.X” (master nodes will use 192.168.5.1X, worker nodes will use 192.168.5.2X)
c. Update /etc/hosts to set up host records for all nodes
d. Add Google's open DNS to the /etc/resolv.conf file
e. Once you finish the above process, you can run vagrant status and you will get something like below
Follow https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ to create a cluster.
a. Initialize a control-plane node: kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.5.11. Here 10.244.0.0/16 specifies the subnet for pods on worker nodes; you can give a different one. Once it's installed successfully, you can run kubectl get nodes and the master node will be displayed as NotReady, as expected.
b. Install a Pod network add-on by following https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model. Here we choose WeaveNet, which doesn't need any additional configuration.
c. Join the worker nodes to the cluster. If you forget the tokens, you can run kubeadm token create --print-join-command and run the kubeadm join command on all worker nodes.
Now if you run kubectl get nodes, you can get a result as below.
NAME         STATUS   ROLES                  AGE   VERSION
kubemaster   Ready    control-plane,master   9h    v1.22.3
kubenode01   Ready    <none>                 9h    v1.22.3
kubenode02   Ready    <none>                 9h    v1.22.3
Welcome to our house! Living here are two writers: Ym is writing software programs and Yf is writing stories. Would you like to know them? You can follow Ym on his GitHub.
Monotonic Stack is the best time complexity solution for many “range queries in an array” problems. Because every element in the array could only enter the monotonic stack once, the time complexity is O(N). (N represents the length of the array.)
We use a monotonic stack while iterating over the nums2 array. During the iteration, if we find that the top stack value is lower than the current value and the top stack value exists in nums1, we have found the answer for that element of nums1. Since all elements are unique, we don't need to worry about overwriting a result.
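The code for this first variant is not shown in this excerpt, so here is a minimal sketch of the idea (my own illustration, assuming the usual nextGreaterElement(nums1, nums2) signature; the position map is my addition so results for nums1 can be filled in directly):

func nextGreaterElement(nums1 []int, nums2 []int) []int {
    // map each value of nums1 to its index (values are unique per the problem)
    pos := make(map[int]int)
    for i, v := range nums1 {
        pos[v] = i
    }
    result := make([]int, len(nums1))
    for i := range result {
        result[i] = -1
    }
    stack := []int{} // values of nums2 still waiting for a greater element
    for _, v := range nums2 {
        for len(stack) > 0 && stack[len(stack)-1] < v {
            top := stack[len(stack)-1]
            stack = stack[:len(stack)-1]
            if j, ok := pos[top]; ok { // the popped value exists in nums1
                result[j] = v
            }
        }
        stack = append(stack, v)
    }
    return result
}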
We can concatenate two copies of nums into one and get a longer result list, then cut the result size back to half. It then becomes the same problem as the one above. We can also use the mod operator to get the correct index instead of actually concatenating.
func nextGreaterElements(nums []int) []int {
    n := len(nums)
    stack := []int{0}
    result := make([]int, n)
    for i := 0; i < n; i++ {
        result[i] = -1
    }
    for i := 1; i < 2*n; i++ {
        for len(stack) > 0 && nums[i%n] > nums[stack[len(stack)-1]] {
            result[stack[len(stack)-1]] = nums[i%n]
            stack = stack[:len(stack)-1]
        }
        stack = append(stack, i%n)
    }
    return result
}
functrap(height []int)int { n := len(height) left := make([]int, n) right := make([]int, n) left[0] = height[0] right[n - 1] = height[n - 1] max := func(a, b int)int { if a > b { return a } return b } min := func(a, b int)int { if a > b { return b } return a } for i := 1; i < n; i++ { left[i] = max(height[i], left[i - 1]) } for i := n - 2; i >= 0; i-- { right[i] = max(height[i], right[i + 1]) } result := 0 for i := 0; i < n; i++ { diff := min(left[i], right[i]) - height[i] if diff > 0 { result += diff } } return result }
b. Monotonic Stack
We collect the trapped water layer by layer for each popped item [https://leetcode.wang/leetCode-42-Trapping-Rain-Water.html]. Basically, the idea is: once there are at least two items in the stack and the stack top is smaller than the current bar, the stack top is the bottom of the rain trapper, the second item from the top is the left boundary, and the current bar is the right boundary, as sketched below.
(The original post includes an ASCII diagram here: five bars labeled a through e, with a tall bar on the left, lower bars in the middle, and another tall bar on the right.)
so result = (min(d, c) - 0) * (d - c) + (min(d, b) - c) * (d - b)
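The stack-based code itself is not shown in this excerpt, so here is a minimal sketch of the idea described above (my own illustration, not the post's code); the min helper mirrors the style used elsewhere in these notes:

func trap(height []int) int {
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    result := 0
    stack := []int{} // indices of bars with non-increasing heights
    for i := 0; i < len(height); i++ {
        for len(stack) > 0 && height[i] > height[stack[len(stack)-1]] {
            bottom := stack[len(stack)-1] // the bar forming the bottom of this water layer
            stack = stack[:len(stack)-1]
            if len(stack) == 0 {
                break // no left boundary, nothing can be trapped here
            }
            left := stack[len(stack)-1]
            width := i - left - 1
            depth := min(height[i], height[left]) - height[bottom]
            result += width * depth
        }
        stack = append(stack, i)
    }
    return result
}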
functrap(height []int)int { result := 0 n := len(height) max := func(a, b int)int { if a > b { return a } return b } min := func(a, b int)int { if a > b { return b } return a } for i := 1; i < n - 1; i++ { left := height[i] for l := i - 1; l >= 0; l-- { left = max(left, height[l]) } right := height[i] for r := i + 1; r < len(height); r++ { right = max(right, height[r]) } amount := min(left, right) - height[i] if amount > 0 { result += amount } } return result }
funclargestRectangleArea(heights []int)int { n := len(heights) left := make([]int, n) left[0] = -1 right := make([]int, n) right[n - 1] = n for i := 1; i < n; i++ { t := i - 1 for t >= 0 && heights[t] >= heights[i] { t = left[t] } left[i] = t } for i := n - 2; i >= 0; i-- { t := i + 1 for t < n && heights[t] >= heights[i] { t = right[t] } right[i] = t } result := 0 for i := 0; i < n; i++ { area := heights[i] * (right[i] - left[i] - 1) if result < area { result = area } } return result }
b. Monotonic Stack solution
We can use a monotonic stack to maintain bar indices in ascending order of height. When we encounter a lower bar, we pop the tallest bar and use it as the bottleneck height to compute the area.
func largestRectangleArea(heights []int) int {
    // The trailing 0 guarantees every bar gets popped; otherwise a fully ascending
    // input would leave everything on the stack without doing anything.
    heights = append(heights, 0)
    n := len(heights)
    result := 0
    stack := []int{}
    for i := 0; i < n; i++ {
        for len(stack) > 0 && heights[i] < heights[stack[len(stack)-1]] {
            h := heights[stack[len(stack)-1]]
            stack = stack[:len(stack)-1]
            w := i
            if len(stack) > 0 {
                w = i - stack[len(stack)-1] - 1
            }
            area := h * w
            if result < area {
                result = area
            }
        }
        stack = append(stack, i)
    }
    return result
}
Let’s denote dp[i][j] as the number of distinct subsequences in s[:i] which can construct t[:j]. So we can get the state transition function dp[i][j] = s[i - 1] == t[j - 1] ? (dp[i - 1][j - 1] + dp[i - 1][j]) : dp[i - 1][j]. Also for the initial value, dp[i][0] needs to be 1 (it means there's exactly one way to construct the empty string from s[:i]).
funcnumDistinct(s string, t string)int { m := len(s) n := len(t) dp := make([][]int, m + 1) for i := 0; i <= m; i++ { dp[i] = make([]int, n + 1) dp[i][0] = 1 } for i := 1; i <= m; i++ { for j := 1; j <= n; j++ { if s[i - 1] == t[j - 1] { dp[i][j] = dp[i - 1][j] + dp[i - 1][j - 1] } else { dp[i][j] = dp[i - 1][j] } } } return dp[m][n] }
funcminDistance(word1 string, word2 string)int { m := len(word1) n := len(word2) dp := make([][]int, m + 1) lcs := 0 max := func(a, b int)int { if a > b { return a } return b } for i := 0; i <= m; i++ { dp[i] = make([]int, n + 1) for j := 0; j <= n; j++ { if i == 0 || j == 0 { continue } elseif word1[i - 1] == word2[j - 1] { dp[i][j] = 1 + dp[i-1][j-1] } else { dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) } lcs = max(lcs, dp[i][j]) } } return m + n - 2 * lcs }
b. Intuitive dynamic programming solution.
Let’s denote dp[i][j] as the minimum delete operations to match word1[:i] and word2[:j]. So the state transition function is dp[i][j] = word1[i-1] == word2[j-1] ? dp[i-1][j-1] : min(dp[i - 1][j] + 1, dp[i][j-1] + 1, dp[i-1][j-1] + 2).
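No code is shown for this intuitive formulation, so here is a minimal sketch under that state transition function; the base cases dp[i][0] = i and dp[0][j] = j (delete every character of the non-empty prefix) are my own addition:

func minDistance(word1 string, word2 string) int {
    m, n := len(word1), len(word2)
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
        dp[i][0] = i // delete all i characters of word1
    }
    for j := 0; j <= n; j++ {
        dp[0][j] = j // delete all j characters of word2
    }
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if word1[i-1] == word2[j-1] {
                dp[i][j] = dp[i-1][j-1]
            } else {
                dp[i][j] = min(dp[i-1][j]+1, min(dp[i][j-1]+1, dp[i-1][j-1]+2))
            }
        }
    }
    return dp[m][n]
}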
This one is similar to the one above. Let's denote dp[i][j] as the edit distance between word1[:i] and word2[:j]. So if word1[i - 1] == word2[j - 1], we get dp[i][j] = dp[i - 1][j - 1]. Otherwise, there will be three cases:
delete one character from word1 (or equivalently add one to word2), so dp[i-1][j] + 1
delete one character from word2 (or equivalently add one to word1), so dp[i][j-1] + 1
replace one character in either word1 or word2, so dp[i-1][j-1] + 1
So dp[i][j] = 1 + min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])
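A minimal sketch of that edit-distance recurrence (my own illustration; the base cases dp[i][0] = i and dp[0][j] = j, i.e. pure deletions or insertions, are assumptions not spelled out above):

func minDistance(word1 string, word2 string) int {
    m, n := len(word1), len(word2)
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
        dp[i][0] = i
    }
    for j := 0; j <= n; j++ {
        dp[0][j] = j
    }
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if word1[i-1] == word2[j-1] {
                dp[i][j] = dp[i-1][j-1]
            } else {
                dp[i][j] = 1 + min(dp[i-1][j-1], min(dp[i-1][j], dp[i][j-1]))
            }
        }
    }
    return dp[m][n]
}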
Let’s denote dp[i][j] as a boolean value that identifies whether the substring s[j:i] is a palindrome or not. So if s[j] != s[i], then dp[i][j] is false. Otherwise, there are two cases:
i - j <= 1, so dp[i][j] = true
i - j > 1, so dp[i][j] = dp[i - 1][j + 1]
func countSubstrings(s string) int { n := len(s) dp := make([][]bool, n + 1) result := 0 for i := 0; i <= n; i++ { dp[i] = make([]bool, n + 1) for j := 0; j <= i; j++ { if i == 0 || j == 0 { continue } else if s[j - 1] == s[i - 1] { if i - j <= 1 || dp[i - 1][j + 1] { dp[i][j] = true result++ } } } } return result }
b. Two pointers expand from center solution
For all palindrome-related problems, we can try the two-pointer solution: we select the middle point (it could be a single pointer or two pointers), then expand to the left and right.
funccountSubstrings(s string)int { n := len(s) expanding := func(l, r int)int { // return the numbers of parlindrom substrings the given string contains result := 0 for l >= 0 && r < n && s[l] == s[r] { result++ l-- r++ } return result } result := 0 for i := 0; i < n; i++ { result += expanding(i, i) result += expanding(i, i + 1) } return result }
Denote dp[i][j] as the longest palindromic subsequence in s[i..j], so if s[i] == s[j], dp[i][j] = 2 + dp[i + 1][j - 1]. Otherwise dp[i][j] = max(dp[i][j - 1], dp[i + 1][j]). Since dp[i][j] depends on dp[i + 1][...] values, we should iterate i in reverse order.
func longestPalindromeSubseq(s string) int {
    n := len(s)
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    dp := make([][]int, n)
    for i := 0; i < n; i++ {
        dp[i] = make([]int, n)
        dp[i][i] = 1
    }
    for i := n - 1; i >= 0; i-- {
        for j := i + 1; j < n; j++ {
            if s[i] == s[j] {
                dp[i][j] = dp[i+1][j-1] + 2
            } else {
                dp[i][j] = max(dp[i+1][j], dp[i][j-1])
            }
        }
    }
    return dp[0][n-1]
}
funclongestPalindrome(s string)string { n := len(s) expanding := func(l, r int)int { // return the longest length of the parlindrom in the given substring for l >= 0 && r < n && s[l] == s[r] { l-- r++ } return r - l - 1 } max := 0 start := 0 for i := 0; i < n; i++ { p1 := expanding(i, i) p2 := expanding(i, i + 1) if p1 > p2 && max < p1 { start = i - (p1 - 1) / 2 max = p1 } elseif max < p2 && p1 < p2 { start = i - (p2 - 2) / 2 max = p2 } } return s[start:start + max] }
If we define dp[i] as the length of the longest increasing subsequence ending at index i, then dp[i] >= 1, and the state transition function is dp[i] = max(dp[i], dp[j] + 1) for j ∈ [0, i) with nums[j] < nums[i].
func lengthOfLIS(nums []int) int {
    n := len(nums)
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    dp := make([]int, n)
    dp[0] = 1
    result := 1
    for i := 1; i < n; i++ {
        dp[i] = 1
        for j := 0; j < i; j++ {
            if nums[i] > nums[j] {
                dp[i] = max(dp[i], dp[j]+1)
            }
        }
        result = max(result, dp[i])
    }
    return result
}
funcfindLengthOfLCIS(nums []int)int { n := len(nums) dp := make([]int, n) dp[0] = 1 result := 1 for i := 1; i < n; i++ { if nums[i] > nums[i - 1] { dp[i] = dp[i - 1] + 1 } else { dp[i] = 1 } if result < dp[i] { result = dp[i] } } return result }
We can reduce the space complexity to O(1)
funcfindLengthOfLCIS(nums []int)int { n := len(nums) count := 1 result := 1 for i := 1; i < n; i++ { if nums[i] > nums[i - 1] { count++ } else { count = 1 } if count > result { result = count } } return result }
Let’s denote dp[i][j] as the maximum length of a repeated subarray ending at nums1[i - 1] and nums2[j - 1]. So we know that dp[i][j] = nums1[i - 1] == nums2[j - 1] ? (dp[i - 1][j - 1] + 1) : 0
func findLength(nums1 []int, nums2 []int) int {
    m, n := len(nums1), len(nums2)
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
    }
    result := 0
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if nums1[i-1] == nums2[j-1] {
                dp[i][j] = dp[i-1][j-1] + 1
            }
            if dp[i][j] > result {
                result = dp[i][j]
            }
        }
    }
    return result
}
Similar to the above: if we denote dp[i][j] as the length of the longest common subsequence of text1[:i] and text2[:j], then dp[i][j] = text1[i - 1] == text2[j - 1] ? dp[i - 1][j - 1] + 1 : max(dp[i - 1][j], dp[i][j - 1]).
func longestCommonSubsequence(text1 string, text2 string) int {
    m, n := len(text1), len(text2)
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
    }
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if text1[i-1] == text2[j-1] {
                dp[i][j] = 1 + dp[i-1][j-1]
            } else {
                dp[i][j] = max(dp[i][j-1], dp[i-1][j])
            }
        }
    }
    return dp[m][n]
}
If you compare this one with the LCS problem above, they are exactly the same: keeping the connecting lines free of intersections means we just need to find the LCS.
funcmaxUncrossedLines(nums1 []int, nums2 []int)int { m := len(nums1) n := len(nums2) dp := make([][]int, m + 1) for i := 0; i <= m; i++ { dp[i] = make([]int, n + 1) } max := func(a, b int)int { if a > b { return a } return b } for i := 1; i <= m; i++ { for j := 1; j <= n; j++ { if nums1[i - 1] == nums2[j - 1] { dp[i][j] = 1 + dp[i - 1][j - 1] } else { dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) } } } return dp[m][n] }
funcisSubsequence(s string, t string)bool { iflen(s) > len(t) { returnfalse } i := 0 for j := 0; i < len(s) && j < len(t); j++ { if s[i] == t[j] { i++ } } return i == len(s) }
b. Dynamic Programming
Let’s use dp[i][j] to denote the length of the matched subsequence between s[:i] and t[:j]. So the state transition function is dp[i][j] = s[i - 1] == t[j - 1] ? dp[i - 1][j - 1] + 1 : dp[i][j - 1]
func isSubsequence(s string, t string) bool {
    m, n := len(s), len(t)
    if m > n {
        return false
    }
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
    }
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if s[i-1] == t[j-1] {
                dp[i][j] = dp[i-1][j-1] + 1
            } else {
                dp[i][j] = dp[i][j-1]
            }
        }
    }
    return dp[m][n] == m
}
121. Best Time to Buy and Sell Stock
a. Two pointers greedy solution.
Since the profit is defined by the current price and the minimum price before it, we can keep one pointer holding the minimum price so far and compare it with the current price as we scan; the best difference seen is the maximum profit.
func maxProfit(prices []int) int {
    profit := 0
    for min, i := 100001, 0; i < len(prices); i++ {
        if min > prices[i] {
            min = prices[i]
        }
        if prices[i]-min > profit {
            profit = prices[i] - min
        }
    }
    return profit
}
b. Dynamic programming
Let’s denote dp[i] as the profit we have so far; there are two cases:
dp[i][0]: we are holding stock; for the first day, buying gives dp[0][0] = -prices[0]
dp[i][1]: we are not holding stock
so the state transition function will be:
dp[i][0] = max(dp[i - 1][0], -prices[i]): the better of keeping the stock we already bought or buying the stock on day i
dp[i][1] = max(dp[i - 1][0] + prices[i], dp[i - 1][1]): the better of selling today the stock we bought earlier or staying without stock
func maxProfit(prices []int) int {
    n := len(prices)
    dp := make([][2]int, n)
    dp[0][0] = -prices[0]
    dp[0][1] = 0
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    for i := 1; i < n; i++ {
        dp[i][0] = max(dp[i-1][0], -prices[i])
        dp[i][1] = max(dp[i-1][0]+prices[i], dp[i-1][1])
    }
    return dp[n-1][1]
}
To gain the maximum profit, we just need to accumulate every positive day-over-day difference, as if we bought on the previous day and sold on the next.
func maxProfit(prices []int) int {
    result := 0
    for i := 1; i < len(prices); i++ {
        profit := prices[i] - prices[i-1]
        if profit > 0 {
            result += profit
        }
    }
    return result
}
b. Peak & Valley solution
Another approach is to find each local lowest price (valley) and sell at the next local highest price (peak), then accumulate all of those local profits.
funcmaxProfit(prices []int)int { result := 0 peak := prices[0] valley := prices[0] length := len(prices) for i := 0; i < length - 1; { for i < length - 1 && prices[i] >= prices[i + 1] { i++ } valley = prices[i] for i < length - 1 && prices[i] <= prices[i + 1] { i++ } peak = prices[i] result += peak - valley } return result }
funcmaxProfit(prices []int)int { length := len(prices) dp := make([][2]int, length) // dp[i][0] on day i we are holding stock // dp[i][1] on day i we don't have stock dp[0] = [2]int{-prices[0], 0} max := func(a, b int)int { if a > b { return a } return b } for i := 1; i < length; i++ { // stock we got from day i - 1, or stock we are going to buy on day i dp[i][0] = max(dp[i - 1][0], dp[i - 1][1] - prices[i]) // we don't have stock from day i - 1, or we are going to sell stock we got from day i - 1 dp[i][1] = max(dp[i - 1][1], dp[i - 1][0] + prices[i]) } return max(dp[length - 1][0], dp[length - 1][1]) }
dp[i][0] we are holding the stock, so dp[i][0] = max(dp[i-1][0], dp[i-1][2] - prices[i])
dp[i][1] we are selling the stock, so dp[i][1] = dp[i - 1][0] + prices[i]
dp[i][2] we are in the cooldown period, so dp[i][2] = max(dp[i-1][2], dp[i-1][1])
funcmaxProfit(prices []int)int { n := len(prices) dp := make([][3]int, n) max := func(a, b int)int { if a > b { return a } return b } dp[0] = [3]int{-prices[0], 0, 0} for i := 1; i < n; i++ { dp[i][0] = max(dp[i-1][0], dp[i-1][2] - prices[i]) dp[i][1] = dp[i-1][0] + prices[i] dp[i][2] = max(dp[i-1][2], dp[i-1][1]) } return max(dp[n-1][1], dp[n-1][2]) }
We can also reduce the space complexity to O(1)
funcmaxProfit(prices []int)int { n := len(prices) max := func(a, b int)int { if a > b { return a } return b } hold := -prices[0] cooldown := 0 sold := 0 for i := 1; i < n; i++ { previousSold := sold sold = hold + prices[i] hold = max(hold, cooldown - prices[i]) cooldown = max(cooldown, previousSold) } return max(cooldown, sold) }
This is also a full knapsack problem. It looks similar to Coin Change II, but the difference here is that we need to count the permutations of the solutions instead of the combinations. So in this case we need to iterate the knapsack space first, then iterate the items.
func combinationSum4(nums []int, target int) int {
    dp := make([]int, target+1)
    dp[0] = 1
    for i := 1; i <= target; i++ {
        for j := 0; j < len(nums); j++ {
            if i >= nums[j] {
                dp[i] += dp[i-nums[j]]
            }
        }
    }
    return dp[target]
}
This is also a full knapsack problem. We can consider all squares not greater than the given n as items, n as the total knapsack capacity, and we can reuse all items to fill the knapsack. To get the minimal number of items, the state transition function is dp[i] = min(dp[i], dp[i - square] + 1) for every square not greater than i.
func numSquares(n int) int {
    squares := []int{}
    for i := 1; i*i <= n; i++ {
        squares = append(squares, i*i)
    }
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    dp := make([]int, n+1)
    for i := 1; i <= n; i++ {
        dp[i] = i // worst case: i ones
        for j := 0; j < len(squares); j++ {
            if i >= squares[j] {
                // here we are counting the number of items, so we add one for the picked square
                dp[i] = min(dp[i], dp[i-squares[j]]+1)
            }
        }
    }
    return dp[n]
}
We can also try to use backtracking to resolve this problem.
funcnumSquares(n int)int { squares := []int{} for i := 1; i * i <= n; i++ { squares = append(squares, i * i) } hash := make(map[int]int) min := func(a, b int)int { if a < b { return a } return b } var backtracking func(int)int backtracking = func(index int)int { if v, ok := hash[index]; ok { return v } if index == 0 { return0 } count := n + 1 for j := 0; j < len(squares); j++ { if index >= squares[j] { count = min(count, backtracking(index - squares[j]) + 1) } } hash[index] = count return count } return backtracking(n) }
Obviously, an empty string can be segmented trivially, so if we denote dp[i] as whether s[:i] can be constructed from the wordDict, dp[0] is true. And the state transition function can be dp[i] = dp[i - len(words[j])] && words[j] == s[i - len(words[j]):i]. We can consider this a full knapsack problem: the words are the items, and s is a special knapsack.
func wordBreak(s string, wordDict []string) bool {
    dp := make([]bool, len(s)+1)
    dp[0] = true
    for i := 1; i <= len(s); i++ {
        for j := 0; j < len(wordDict); j++ {
            if i >= len(wordDict[j]) {
                dp[i] = dp[i] || (dp[i-len(wordDict[j])] && wordDict[j] == s[i-len(wordDict[j]):i])
            }
        }
    }
    return dp[len(s)]
}
We can also optimize it with a hashmap to store all words
funcwordBreak(s string, wordDict []string)bool { dp := make([]bool, len(s) + 1) hash := make(map[string]bool) for _, word := range wordDict { hash[word] = true } dp[0] = true for i := 1; i <= len(s); i++ { for j := 0; j < i; j++ { dp[i] = dp[i] || (dp[j] && hash[s[j:i]]) } } return dp[len(s)] }
funcfindTargetSumWays(nums []int, target int)int { count := 0 var dfs func(int ,int) dfs = func(index, left int) { if index == len(nums) { if left == 0 { count++ } } else { dfs(index + 1, left + nums[index]) dfs(index + 1, left - nums[index]) } } dfs(0, target) return count }
b. Dynamic Programming
dp[i][j] refers to the number of assignments which can lead to a sum of j up to the ith items in the Array. We can get the state transition function: dp[i][j] = dp[i - 1][j + nums[i]] + dp[i - 1][j - nums[i]]
funcfindTargetSumWays(nums []int, target int)int { sum := 0 for _, num := range nums { sum += num } if sum < target || -sum > target || (sum + target) % 2 != 0 { return0 } n := len(nums) dp := make([][]int, n) for i := 0; i < n; i++ { dp[i] = make([]int, 2 * sum + 1) } dp[0][sum + nums[0]] = 1 dp[0][sum - nums[0]] += 1 for i := 1; i < n; i++ { for j := -sum; j <= sum; j++ { if j + nums[i] < sum + 1 { dp[i][j + sum + nums[i]] += dp[i - 1][j + sum] } if j + sum - nums[i] >= 0 { dp[i][j + sum - nums[i]] += dp[i - 1][j + sum] } } } return dp[n - 1][sum + target] }
c. Knapsack solution (subset sum)
Based on the problem description, we will have two subsets. One with positive symbol (s1) and another one with negative symbol (s2). So s1 + s2 = sum and s1 - s2 = target. We can convert this problem to a 0-1 knapsack problem — find a subset which subtotal is s1 = (sum + target) / 2.
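A minimal sketch of that subset-sum counting idea as a 1D 0-1 knapsack (my own illustration; it assumes the nums values are non-negative, as in the original problem):

func findTargetSumWays(nums []int, target int) int {
    sum := 0
    for _, num := range nums {
        sum += num
    }
    if target > sum || target < -sum || (sum+target)%2 != 0 {
        return 0
    }
    s1 := (sum + target) / 2
    dp := make([]int, s1+1)
    dp[0] = 1 // one way to pick nothing for sum 0
    for i := 0; i < len(nums); i++ {
        for j := s1; j >= nums[i]; j-- { // 0-1 knapsack: iterate the capacity backwards
            dp[j] += dp[j-nums[i]]
        }
    }
    return dp[s1]
}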
funcrob(nums []int)int { n := len(nums) dp := make([]int, n + 1) dp[1] = nums[0] max := func(a, b int)int { if a > b { return a } return b } for i := 2; i <= n; i++ { dp[i] = max(dp[i - 1], dp[i - 2] + nums[i - 1]) } return dp[n] }
/** * Definition for a binary tree node. * type TreeNode struct { * Val int * Left *TreeNode * Right *TreeNode * } */ funcrob(root *TreeNode)int { cache := make(map[*TreeNode]int) var dfs func(*TreeNode)int dfs = func(root *TreeNode)int { if root == nil { return0 } if value, ok := cache[root]; ok { return value } rootValue := root.Val if root.Left != nil { rootValue += dfs(root.Left.Left) + dfs(root.Left.Right) } if root.Right != nil { rootValue += dfs(root.Right.Left) + dfs(root.Right.Right) } childValue := dfs(root.Left) + dfs(root.Right) maxValue := childValue if rootValue > childValue { maxValue = rootValue } cache[root] = maxValue return maxValue } return dfs(root) }
b. Dynamic Programming to cache more calculation results.
Since each node has two states (robbing it or not) and the solution above only caches the max value, caching both states in an array speeds up the calculation.
funcrob(root *TreeNode)int { cache := make(map[*TreeNode][2]int) var dfs func(*TreeNode) [2]int max := func(a, b int)int { if a > b { return a } return b } dfs = func(root *TreeNode) [2]int { if root == nil { return [2]int{0, 0} } if value, ok := cache[root]; ok { return value } rootValue := root.Val leftValue := dfs(root.Left) rightValue := dfs(root.Right) childValue := max(leftValue[0], leftValue[1]) + max(rightValue[0], rightValue[1]) cache[root] = [2]int{rootValue + leftValue[1] + rightValue[1], childValue} return cache[root] } value := dfs(root) return max(value[0], value[1]) }
The knapsack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
If the sum is an odd number, we can immediately tell the partition is impossible. If the sum is an even number s, it means we need to find some items in the slice which sum to s / 2. Now the problem becomes a classical 0-1 knapsack problem.
a. Classical 0-1 knapsack solution
Denote dp[i][j] as whether we can construct a sum j from the first i items. So the state transition function is dp[i][j] = dp[i - 1][j] || dp[i - 1][j - nums[i]]. The value is determined by:
If we don’t use the current item, we need to check if we can construct the target j by the first i - 1 items: dp[i-1][j]
If we use the current item, we need to check if we can construct the target j - nums[i] by the first i - 1 items: dp[i - 1][j - nums[i]]
funccanPartition(nums []int)bool { sum := 0 for _, num := range nums { sum += num } if sum % 2 == 1 { returnfalse } target := sum / 2 n := len(nums) dp := make([][]bool, n + 1) for i := 0; i <= n; i++ { dp[i] = make([]bool, target + 1) dp[i][0] = true } for i := 1; i <= n; i++ { for j := 1; j <= target; j++ { if j >= nums[i - 1] { dp[i][j] = dp[i - 1][j] || dp[i - 1][j - nums[i - 1]] } else { dp[i][j] = dp[i - 1][j] } } } return dp[n][target] }
b. Rolling array solution
Based on the state transition function above, we can simplify it by using a 1D array: dp[j] = dp[j] || dp[j - nums[i]]. Note that with the 1D array the smaller indices would affect the larger ones within the same round, so we need to iterate the capacity from right to left.
funccanPartition(nums []int)bool { sum := 0 for _, num := range nums { sum += num } if sum % 2 == 1 { returnfalse } target := sum / 2 n := len(nums) dp := make([]bool, target + 1) dp[0] = true for i := 0; i < n; i++ { for j := target; j >= nums[i]; j-- { dp[j] = dp[j] || dp[j - nums[i]] } } return dp[target] }
Note: the two solutions above use a bool value as the dp array value type; we can also use an int to store the sum we can reach. In that case the state transition function will be dp[j] = max(dp[j], dp[j - nums[i]] + nums[i]), and at the end we just need to verify dp[target] == target.
To get the minimum result, we need to try our best to split the stones into two subsets of similar weight. Let sum denote the total weight of all stones; we need to find the largest reachable subset weight target not exceeding sum / 2, which gives the minimum result sum - 2 * target.
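No code is shown for this one in the excerpt, so here is a minimal sketch of the idea (my own illustration, assuming the usual lastStoneWeightII(stones) signature); dp[j] stores the largest stone weight we can pick without exceeding capacity j:

func lastStoneWeightII(stones []int) int {
    sum := 0
    for _, s := range stones {
        sum += s
    }
    target := sum / 2
    dp := make([]int, target+1) // dp[j]: max subset weight not exceeding j
    for i := 0; i < len(stones); i++ {
        for j := target; j >= stones[i]; j-- { // 0-1 knapsack, capacity iterated backwards
            if dp[j-stones[i]]+stones[i] > dp[j] {
                dp[j] = dp[j-stones[i]] + stones[i]
            }
        }
    }
    return sum - 2*dp[target]
}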
Each item has two properties (the count of 1s and the count of 0s) and we need to get the maximum size of a subset under the two-dimension restrictions (at most m zeros and n ones). It can be considered a classical two-dimension 0-1 knapsack problem. So the state transition function is dp[i][j] = max(dp[i][j], dp[i - zeros][j - ones] + 1). (Note: ideally we would need a 3D array to solve this problem, but based on the state transition function we can reduce it to a 2D rolling array with a reverse for-loop.)
func findMaxForm(strs []string, m int, n int) int {
    dp := make([][]int, m+1)
    for i := 0; i <= m; i++ {
        dp[i] = make([]int, n+1)
    }
    for _, str := range strs {
        ones := strings.Count(str, "1")
        zeros := strings.Count(str, "0")
        for i := m; i >= zeros; i-- {
            for j := n; j >= ones; j-- {
                pickme := dp[i-zeros][j-ones] + 1
                if dp[i][j] < pickme {
                    dp[i][j] = pickme
                }
            }
        }
    }
    return dp[m][n]
}
This is a classical full knapsack problem. The state transition function is dp[i] = min(dp[i], dp[i - coins[j]] + 1). Since we need to get the minimal number, the initial value needs to be an integer that is out of the valid range (except dp[0], which is 0). We can use either math.MaxInt32 or amount + 1.
func coinChange(coins []int, amount int) int {
    dp := make([]int, amount+1)
    for i := 1; i <= amount; i++ {
        dp[i] = math.MaxInt32 // out-of-range initial value; dp[0] stays 0
    }
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    for i := 1; i <= amount; i++ {
        for j := 0; j < len(coins); j++ {
            if i >= coins[j] && dp[i-coins[j]] != math.MaxInt32 {
                // If we pick the current coin and the state dp[i - coins[j]] already has a
                // valid (non-initial) solution, we have a candidate for dp[i].
                dp[i] = min(dp[i], dp[i-coins[j]]+1)
            }
        }
    }
    if dp[amount] == math.MaxInt32 {
        return -1
    }
    return dp[amount]
}
This is also a full knapsack problem. The difference between this and the one above is that we need to count the number of combinations. So the state transition function is dp[i] += dp[i - coins[j]]. Since each coin-change solution here is a combination rather than a permutation, we can only iterate the coins first. If we iterate the knapsack space first, we will get duplicated results like [[coins[0], coins[1]], [coins[1], coins[0]]]. A sketch of this loop order follows.
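A minimal sketch of that combination-counting loop order (my own illustration, assuming the usual change(amount, coins) signature of the coin change II problem):

func change(amount int, coins []int) int {
    dp := make([]int, amount+1)
    dp[0] = 1 // one way to make amount 0: use no coins
    for j := 0; j < len(coins); j++ { // iterate the coins first to count combinations, not permutations
        for i := coins[j]; i <= amount; i++ {
            dp[i] += dp[i-coins[j]]
        }
    }
    return dp[amount]
}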
Based on the problems above, we can get a knapsack problem solution template
0-1 knapsack template
dp := make([]int, amount + 1)
dp[0] = // Initial value based on the problem
for i := 0; i < len(nums); i++ {
    for j := amount; j >= nums[i]; j-- {
        // state transition function
        // dp[j] = dp[j] || dp[j - nums[i]]
        // dp[j] = max(dp[j], dp[j - nums[i]] + nums[i])
    }
}
full knapsack template to get the combination of items
dp := make([]int, amount + 1)
dp[0] = // Initial value based on the problem
// dp[i] = // Initial value based on the problem, could be 0 for counting total solutions or a min/max value to get the maximum/minimum expectation
for i := 0; i < len(nums); i++ {
    for j := nums[i]; j <= amount; j++ {
        // state transition function
        // dp[j] += dp[j - nums[i]]
        // dp[j] = min(dp[j], dp[j - nums[i]] + nums[i])
    }
}
full knapsack template to get the permutation of items
dp := make([]int, amount + 1)
dp[0] = // Initial value based on the problem
// dp[i] = // Initial value based on the problem, could be 0 for counting total solutions or a min/max value to get the maximum/minimum expectation
for j := 0; j <= amount; j++ {
    for i := 0; i < len(nums); i++ {
        // state transition function (guarded with j >= nums[i])
        // dp[j] += dp[j - nums[i]]
        // dp[j] = min(dp[j], dp[j - nums[i]] + nums[i])
    }
}
Dynamic Programming (commonly referred to as DP) is an algorithmic technique for solving a problem by recursively breaking it down into simpler subproblems and using the fact that the optimal solution to the overall problem depends upon the optimal solution to its individual subproblems. Here is an interesting Quora question: How should I explain dynamic programming to a 4-year-old?.
func climbStairs(n int) int {
    if n < 3 {
        return n
    }
    dp := make([]int, n + 1)
    dp[0] = 0
    dp[1] = 1
    dp[2] = 2
    for i := 3; i <= n; i++ {
        dp[i] = dp[i - 1] + dp[i - 2]
    }
    return dp[n]
}
If we are allowed to climb 1 ~ m steps each time, how do we solve this problem? It becomes a full knapsack problem, and the state transition function is dp[i] += dp[i - j]. The code below handles the special case of steps 1 and 2.
func climbStairs(n int) int {
    steps := []int{1, 2} // the special case of this problem: 1 or 2 steps each time
    dp := make([]int, n + 1)
    dp[0] = 1
    for i := 1; i <= n; i++ {
        for j := 0; j < len(steps); j++ {
            if i >= steps[j] {
                dp[i] += dp[i - steps[j]]
            }
        }
    }
    return dp[n]
}
Denote dp[i] as the cost to step away from the ith stair, so the state transition function is dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i].
func minCostClimbingStairs(cost []int) int {
    n := len(cost)
    dp := make([]int, n)
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    dp[0] = cost[0]
    dp[1] = cost[1]
    for i := 2; i < n; i++ {
        dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i]
    }
    return min(dp[n - 1], dp[n - 2]) // to reach stair n, we can step away from n - 1 or n - 2
}
Another way to think about it: if we denote dp[i] as the cost to reach the ith stair, the state transition function is dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2]).
func minCostClimbingStairs(cost []int) int {
    n := len(cost)
    dp := make([]int, n + 1)
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }
    for i := 2; i <= n; i++ {
        dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2])
    }
    return dp[n]
}
It’s easy to get the state transition function dp[i][j] = dp[i - 1][j] + dp[i][j - 1]. Note the special case: for the first row and the first column, the value is 1.
func uniquePaths(m int, n int) int {
    dp := make([][]int, m)
    for i := 0; i < m; i++ {
        dp[i] = make([]int, n)
        dp[i][0] = 1
    }
    for i := 0; i < n; i++ {
        dp[0][i] = 1
    }
    for i := 1; i < m; i++ {
        for j := 1; j < n; j++ {
            dp[i][j] = dp[i - 1][j] + dp[i][j - 1]
        }
    }
    return dp[m - 1][n - 1]
}
Based on the state transition function, dp[i][j] depends on only two values, so we can optimize the space complexity from O(m * n) to O(n) by using a new state transition function dp[j] = dp[j] + dp[j - 1].
func uniquePaths(m int, n int) int {
    dp := make([]int, n)
    for i := 0; i < n; i++ {
        dp[i] = 1
    }
    for i := 1; i < m; i++ {
        for j := 1; j < n; j++ {
            dp[j] += dp[j - 1]
        }
    }
    return dp[n - 1]
}
Based on the BST properties, we can get the state transition function dp[i] += dp[j] * dp[i - j - 1], where dp[i] denotes the number of unique BSTs with i nodes: for each choice of root, we have j nodes in the left subtree and i - j - 1 in the right subtree. Note that the base case dp[0] is 1: if there are 0 nodes in the left subtree, there is exactly one way to construct it.
func numTrees(n int) int {
    if n < 3 {
        return n
    }
    dp := make([]int, n + 1)
    dp[0] = 1
    dp[1] = 1
    dp[2] = 2
    for i := 3; i <= n; i++ {
        for j := 0; j < i; j++ {
            dp[i] += dp[j] * dp[i - j - 1] // j nodes on the left, i - j - 1 on the right
        }
    }
    return dp[n]
}
Similar to other segment-related problems, the first thing we need to do is to sort the slice. Once we have a sorted segment slice, we can iterate over all items and merge them. Note there is one edge case to cover after the iteration: whether we merged all segments into one or the last segment couldn’t be merged into the previous one, the final open segment still needs to be appended to the result.
func merge(intervals [][]int) [][]int {
    sort.Slice(intervals, func(a, b int) bool {
        return intervals[a][0] < intervals[b][0]
    })
    result := [][]int{}
    start := intervals[0][0]
    end := intervals[0][1]
    for i := 1; i < len(intervals); i++ {
        if end < intervals[i][0] { // no overlap, close the current segment
            result = append(result, []int{start, end})
            start = intervals[i][0]
            end = intervals[i][1]
        } else if end < intervals[i][1] { // overlap, extend the current segment
            end = intervals[i][1]
        }
    }
    result = append(result, []int{start, end}) // the edge case: append the last open segment
    return result
}
1005. Maximize Sum Of Array After K Negations
To get the maximum sum, we need to convert as many negative numbers as possible into positive ones. If an odd number of negations is still left after that, we just negate the smallest remaining number.
func largestSumAfterKNegations(nums []int, k int) int {
    sort.Ints(nums)
    i := 0
    for i < k && i < len(nums) {
        if nums[i] < 0 {
            nums[i] = -nums[i] // flip the negative numbers starting from the smallest
            i++
        } else {
            break
        }
    }
    if i < k && (k - i) % 2 == 1 { // an odd number of negations left: flip the smallest number once
        sort.Ints(nums)
        nums[0] = -nums[0]
    }
    result := 0
    for _, num := range nums {
        result += num
    }
    return result
}
Several cases: 1) If the total amount of gas is less than the total cost, we can’t make a round trip. 2) Given an arbitrary start point i, at i we have gas[i] in the tank. Start at this point and accumulate the gas left in the tank. If at point i + k the accumulation becomes negative, it means we can’t reach point i + k + 1 from any start point in [i, i + k], so we can jump straight to trying i + k + 1 instead of i + 1.
So the local optimum for a given start point is that, while making the round trip from it, the running balance never goes negative. If it does go negative, we restart right after the point where it happened. This local choice leads to the global optimal solution.
func canCompleteCircuit(gas []int, cost []int) int {
    result := 0
    sum := 0
    debt := 0
    for i := 0; i < len(gas); i++ {
        sum += gas[i] - cost[i]
        debt += gas[i] - cost[i]
        if debt < 0 { // can't get past i from the current start, restart after i
            debt = 0
            result = i + 1
        }
    }
    if sum < 0 {
        return -1
    }
    return result
}
Since a child’s rating only constrains the candy counts of neighbouring children, we can distribute the candy in one pass from one end so that every child is happy compared with one neighbour, and then make another pass from the other end for the other neighbour.
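The code for the candy problem itself is not shown above; a minimal two-pass sketch of the idea, assuming the usual 135. Candy signature, might look like this:
func candy(ratings []int) int {
    n := len(ratings)
    candies := make([]int, n)
    for i := range candies {
        candies[i] = 1 // every child gets at least one candy
    }
    for i := 1; i < n; i++ { // left-to-right pass: satisfy the left-neighbour constraint
        if ratings[i] > ratings[i-1] {
            candies[i] = candies[i-1] + 1
        }
    }
    total := candies[n-1]
    for i := n - 2; i >= 0; i-- { // right-to-left pass: satisfy the right-neighbour constraint
        if ratings[i] > ratings[i+1] && candies[i] <= candies[i+1] {
            candies[i] = candies[i+1] + 1
        }
        total += candies[i]
    }
    return total
}
The snippet that follows belongs to a different problem, 860. Lemonade Change: keep counts of $5 and $10 bills and greedily give change with the larger bills first.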
func lemonadeChange(bills []int) bool {
    five := 0
    ten := 0
    for i := 0; i < len(bills); i++ {
        switch bills[i] {
        case 5:
            five++
        case 10:
            ten++
            if five >= 1 {
                five--
            } else {
                return false
            }
        case 20:
            if ten == 0 {
                if five < 3 {
                    return false
                }
                five -= 3
            } else if ten > 0 {
                ten--
                if five == 0 {
                    return false
                }
                five--
            }
        }
    }
    return true
}
Since there are two dimensions and the dimension k depends on h, the idea is to sort the given slice with h descending as the primary order and k ascending as the secondary order. After that, we insert the items one by one into a new slice at the position given by their k value, like an insertion sort.
func reconstructQueue(people [][]int) [][]int {
    result := [][]int{}
    sort.Slice(people, func(a, b int) bool {
        // order by h desc, then by k asc
        if people[a][0] == people[b][0] {
            return people[a][1] < people[b][1]
        }
        return people[a][0] > people[b][0]
    })
    for _, p := range people {
        result = append(result, p)
        copy(result[p[1] + 1:], result[p[1]:]) // insert p at index k
        result[p[1]] = p
    }
    return result
}
A greedy solution: choosing a shooting point where the most line segments overlap is the local optimum, and it also leads to the global optimal solution. For example, we have four segments as below. If we sort them by start point, we can easily see that the first shooting point should be between e and b. So we iterate over all segments: if the current segment’s start point is no greater than the previous one’s end point, we can merge the two by resetting the current one’s end point to the minimum of its own end point and the previous one’s end point.
|--------------|
a              b
   |-----------------|
   c                 d
        |---------|
        e         f
                       |---------|
                       g         h
func findMinArrowShots(points [][]int) int {
    sort.Slice(points, func(a, b int) bool {
        return points[a][0] < points[b][0]
    })
    result := 1
    for i := 1; i < len(points); i++ {
        if points[i][0] > points[i - 1][1] { // no overlap with the previous segment, one more arrow
            result++
        } else if points[i][1] > points[i - 1][1] { // overlap, shrink the merged end point
            points[i][1] = points[i - 1][1]
        }
    }
    return result
}
This one is a similar problem to #452. An intuition for this kind of problem is to sort first; since every line segment has two end points, we have two choices for the sort key. The local optimum is to keep as many non-overlapping segments as possible: the next kept segment must start no earlier than the end of the previously kept one. With this in mind, we can quickly count the kept segments, and the answer is the total minus that count. If we sort by the end point, we can iterate from left to right; otherwise we need to reverse the iteration order.
func eraseOverlapIntervals(intervals [][]int) int {
    sort.Slice(intervals, func(a, b int) bool {
        return intervals[a][1] < intervals[b][1]
    })
    end := intervals[0][1]
    count := 1 // number of non-overlapping intervals we can keep
    for i := 1; i < len(intervals); i++ {
        if end <= intervals[i][0] {
            count++
            end = intervals[i][1]
        }
    }
    return len(intervals) - count
}
a. Greedy solution
At each step, greedily extending the furthest reachable position is the local optimum, and the global answer follows if we always take that greedy extension.
func canJump(nums []int) bool {
    distance := 0
    length := len(nums)
    for i := 0; i <= distance; i++ { // note: distance controls which items we can check
        if distance < i + nums[i] {
            distance = i + nums[i]
        }
        if distance >= length - 1 {
            return true
        }
    }
    return false
}
b. Dynamic programming
func canJump(nums []int) bool {
    length := len(nums)
    dp := make([]bool, length) // dp[i]: whether position i is reachable
    dp[0] = true
    for i := 0; i < length; i++ {
        if dp[i] {
            for j := 1; j <= nums[i] && i + j < length; j++ {
                dp[i + j] = true
            }
        }
    }
    return dp[length - 1]
}
a. Greedy solution
Each time, we jump to a position that lets the next jump go even further. Each jump expands to a coverage range as below, so the total number of jump steps is the number of times we reach the edge of the current coverage range.
| 2 | 3 | 1 | 1 | 4 | 5 | 1 | 2 |
|---------->|
            |-------------->|
                            |-------------->|
func jump(nums []int) int {
    result := 0
    length := len(nums)
    end := 0
    max := 0
    for i := 0; i < length - 1; i++ {
        if max < nums[i] + i { // get the next coverage edge
            max = nums[i] + i
        }
        if i == end { // switch to the next range with a jump
            end = max
            result++
        }
    }
    return result
}
b. Dynamic programming
func jump(nums []int) int {
    dp := make([]int, len(nums)) // dp[i]: the minimum number of jumps to reach i
    dp[0] = 0
    for i := 1; i < len(nums); i++ {
        dp[i] = -1
    }
    for i := 0; i < len(nums); i++ {
        for j := 1; j <= nums[i] && i + j < len(nums); j++ {
            if dp[i + j] == -1 {
                dp[i + j] = dp[i] + 1
            } else if dp[i + j] > dp[i] + 1 {
                dp[i + j] = dp[i] + 1
            }
        }
    }
    return dp[len(nums) - 1]
}
The key point here is that, during the recursive traversal, we must not get lost in an infinite loop, so we need to remember all of the visited nodes in a hash.
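The recursive problem this paragraph refers to is not shown in this export; as an illustration of the visited-hash idea only, a sketch in the style of 1306. Jump Game III (an assumption, with a hypothetical name canReachZero so it does not clash with the canReach functions further down) could be:
func canReachZero(arr []int, start int) bool {
    visited := make(map[int]bool)
    var dfs func(int) bool
    dfs = func(i int) bool {
        if i < 0 || i >= len(arr) || visited[i] {
            return false // out of range or already explored
        }
        if arr[i] == 0 {
            return true
        }
        visited[i] = true // remember the node so the recursion can't loop forever
        return dfs(i + arr[i]) || dfs(i - arr[i])
    }
    return dfs(start)
}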
This is a classical tree-traversal-with-BFS problem. From each array index we can jump to multiple elements simultaneously; those next-step elements can be considered as the tree node’s child nodes. Visiting all nodes’ children at once can then be considered as one jump (BFS). Since nodes with the same value can jump to each other, we have to mark the node values that have already been pushed to the queue so that we don’t push the same group of nodes again (even with a visited flag, we can easily run out of memory without the extra same-number flag, for example when the slice contains 1000 nodes with the same value).
func minJumps(arr []int) int {
    length := len(arr)
    if length < 3 {
        return length - 1
    }
    jumpIndexes := make(map[int][]int) // value -> all indexes holding that value
    for i, v := range arr {
        jumpIndexes[v] = append(jumpIndexes[v], i)
    }
    queue := make([]int, 1)
    queue[0] = 0
    result := 0
    visited := make([]bool, length)
    // Having this flag is one option; another is to delete the key from jumpIndexes once used.
    sameNumberVisited := make(map[int]bool)
    n := len(queue)
    for n > 0 {
        for i := 0; i < n; i++ {
            index := queue[i]
            if index == length - 1 {
                return result
            }
            if !visited[index] {
                visited[index] = true
                if index - 1 >= 0 && !visited[index - 1] {
                    queue = append(queue, index - 1)
                }
                if index + 1 < length && !visited[index + 1] {
                    queue = append(queue, index + 1)
                }
                if !sameNumberVisited[arr[index]] {
                    sameNumberVisited[arr[index]] = true
                    for _, v := range jumpIndexes[arr[index]] {
                        if !visited[v] {
                            queue = append(queue, v)
                        }
                    }
                }
            }
        }
        queue = queue[n:]
        n = len(queue)
        result++
    }
    return result
}
To solve this problem, we need to understand the rule: you can only jump from index i to index j if arr[i] > arr[j] and arr[i] > arr[k] for all indices k between i and j (more formally, min(i, j) < k < max(i, j)). Say we stand at index i and try jumping to i - 1, i + 1, and so on up to i - d, i + d inside the for loops; we need to break the loop as soon as we find a k in [i - d, i) or (i, i + d] with arr[k] >= arr[i].
func maxJumps(arr []int, d int) int {
    result := 0
    length := len(arr)
    dp := make([]int, length) // dp[i]: the most indices we can visit starting from i
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    min := func(a, b int) int {
        if a > b {
            return b
        }
        return a
    }
    var jump func(int) int
    jump = func(index int) int {
        if dp[index] == 0 { // not computed yet
            dp[index] = 1
            for i := index - 1; i >= max(0, index - d) && arr[i] < arr[index]; i-- {
                dp[index] = max(dp[index], jump(i) + 1)
            }
            for i := index + 1; i <= min(length - 1, index + d) && arr[i] < arr[index]; i++ {
                dp[index] = max(dp[index], jump(i) + 1)
            }
        }
        return dp[index]
    }
    for i := 0; i < length; i++ {
        dp[i] = jump(i)
        result = max(dp[i], result)
    }
    return result
}
A naive DP iterates over all k previous positions for every index, so the worst-case time complexity is O(n * k), which will most likely cause a TLE. This one can be considered a classic sliding-window-maximum problem: since dp[i] = nums[i] + max(dp[i - k], … , dp[i - 1]), we just need to maintain the maximum dp value inside the sliding window during the iteration.
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1 1 1
|---------(i)------|
      |----->|
      i-k    i-1
|----------(i + 1)------|
       |----->|
       i-k+1   i
func maxResult(nums []int, k int) int {
    length := len(nums)
    queue := []int{0} // stores the dp indexes of the sliding window items
    dp := make([]int, length)
    dp[0] = nums[0]
    for i := 1; i < length; i++ {
        maxSumIndex := queue[0] // the window front always holds the index of the maximum dp value
        dp[i] = nums[i] + dp[maxSumIndex]
        for len(queue) > 0 && dp[queue[len(queue) - 1]] <= dp[i] {
            // the sliding window queue keeps dp values in descending order
            queue = queue[:len(queue) - 1]
        }
        for len(queue) > 0 && i - queue[0] >= k {
            // remove the index which is going out of the window
            queue = queue[1:]
        }
        queue = append(queue, i) // push the index (not the dp value) into the window
    }
    return dp[length - 1]
}
Same as the one above, a naive DP will get a TLE. A key point for this problem is to avoid visiting duplicated nodes. One way is to use a hashmap to record all visited elements. Another is to skip the overlap, like below: say the first jump range is 1 ~ 2 and the second is 3 ~ 4; any part of a new range that falls at or before the edge we have already scanned does not need to be visited again. With this in mind, we can use a tree-like traversal with a queue, or a two-pointer sliding window, to fix the issue.
func canReach(s string, minJump int, maxJump int) bool {
    queue := []int{0}
    length := len(s)
    visited := make(map[int]bool)
    visited[0] = true
    edge := 0 // the rightmost position we have already scanned
    min := func(a, b int) int {
        if a > b {
            return b
        }
        return a
    }
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    for len(queue) > 0 {
        index := queue[0]
        if index == length - 1 {
            return true
        }
        queue = queue[1:]
        left := index + minJump
        right := min(length - 1, index + maxJump)
        for i := max(edge + 1, left); i <= right; i++ {
            if s[i] == '0' && !visited[i] {
                visited[i] = true
                queue = append(queue, i)
            }
        }
        edge = right
    }
    return false
}
func canReach(s string, minJump int, maxJump int) bool {
    length := len(s)
    if s[length - 1] == '0' {
        min := func(a, b int) int {
            if a > b {
                return b
            }
            return a
        }
        max := func(a, b int) int {
            if a > b {
                return a
            }
            return b
        }
        canVisit := make(map[int]bool)
        canVisit[0] = true
        edge := 0
        for i := 0; i <= edge && i < length; i++ {
            if canVisit[i] {
                left := i + minJump
                right := min(length - 1, i + maxJump)
                for j := max(left, edge + 1); j <= right; j++ {
                    if s[j] == '0' {
                        canVisit[j] = true
                        if j == length - 1 {
                            return true
                        }
                    }
                }
                edge = right
            }
        }
    }
    return false
}
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. So the problems where a locally optimal choice also leads to the global solution are the best fit for greedy.
If each child is made content locally with the smallest cookie that satisfies them, the total number of content children is maximized, so the local optimum leads to the global optimum and we can use greedy: sort both slices and serve the children with the lowest greed factor first.
import "sort"

func findContentChildren(g []int, s []int) int {
    sort.Ints(g)
    sort.Ints(s)
    result := 0
    for i, j := 0, 0; i < len(g) && j < len(s); j++ {
        if g[i] <= s[j] { // the jth cookie can satisfy the ith child
            i++
            result++
        }
    }
    return result
}
If every pair of connected neighbours wiggles, the whole slice wiggles, which means we can use a greedy algorithm. We can also draw all elements of the slice as a wave; the target is to count how many peaks and valleys the wave has, where a peak or valley is an element whose left diff and right diff have different signs.
func wiggleMaxLength(nums []int) int {
    result := 1
    previous := 0
    current := 0
    for i := 0; i < len(nums) - 1; i++ {
        current = nums[i + 1] - nums[i]
        if (current > 0 && previous <= 0) || (current < 0 && previous >= 0) {
            previous = current
            result++
        }
    }
    return result
}
b. Dynamic programming solution. Since we want to get a maximum length, dynamic programming is the other algorithm that naturally comes to mind.
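The DP code is not included above; a minimal sketch (with a hypothetical name wiggleMaxLengthDP so it does not clash with the greedy version) could track the longest wiggle subsequence ending with a rising or a falling difference:
func wiggleMaxLengthDP(nums []int) int {
    if len(nums) < 2 {
        return len(nums)
    }
    up, down := 1, 1 // longest wiggle subsequence so far ending on a rise / a fall
    for i := 1; i < len(nums); i++ {
        if nums[i] > nums[i-1] {
            up = down + 1
        } else if nums[i] < nums[i-1] {
            down = up + 1
        }
    }
    if up > down {
        return up
    }
    return down
}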
A naive solution for the maximum subarray problem is to use a two-level nested for loop to go through all subarrays.
a. Greedy implementation
The idea of the greedy algorithm: while accumulating the running sum for a local maximum, if the sum of all previous elements becomes negative, we reset the start point to the current element.
func maxSubArray(nums []int) int {
    result := nums[0]
    sum := 0
    for i := 0; i < len(nums); i++ {
        sum += nums[i]
        if sum > result {
            result = sum
        }
        if sum < 0 { // a negative running sum can only hurt, reset the start point
            sum = 0
        }
    }
    return result
}
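b. Divide and conquer solution. The maximum subarray either lies entirely in the left half, entirely in the right half, or crosses the middle element; getCrossMiddleMaxSubArray below handles the crossing case.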
func maxSubArray(nums []int) int {
    getCrossMiddleMaxSubArray := func(start, end, middle int) int {
        left, right := 0, 0
        if middle > 0 {
            sum := 0
            for i := middle - 1; i >= start; i-- {
                sum += nums[i]
                if sum > left {
                    left = sum
                }
            }
        }
        if middle < end {
            sum := 0
            for i := middle + 1; i <= end; i++ {
                sum += nums[i]
                if sum > right {
                    right = sum
                }
            }
        }
        return left + nums[middle] + right
    }
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    var getMaxSubArray func(int, int) int
    getMaxSubArray = func(start, end int) int {
        if start == end {
            return nums[start]
        }
        mid := (start + end) / 2
        left := getMaxSubArray(start, mid)
        right := getMaxSubArray(mid + 1, end)
        middle := getCrossMiddleMaxSubArray(start, end, mid)
        return max(max(left, middle), right)
    }
    return getMaxSubArray(0, len(nums) - 1)
}
To gain the maximum profit, we just need to accumulate every positive profit from buying on one day and selling on the next day.
func maxProfit(prices []int) int {
    result := 0
    for i := 1; i < len(prices); i++ {
        profit := prices[i] - prices[i - 1]
        if profit > 0 {
            result += profit
        }
    }
    return result
}
b. Peak & Valley solution
A naive approach is to find each local lowest price (valley) and sell at the next local highest price (peak), then accumulate all of those local profits.
func maxProfit(prices []int) int {
    result := 0
    peak := prices[0]
    valley := prices[0]
    length := len(prices)
    for i := 0; i < length - 1; {
        for i < length - 1 && prices[i] >= prices[i + 1] {
            i++
        }
        valley = prices[i] // local lowest price
        for i < length - 1 && prices[i] <= prices[i + 1] {
            i++
        }
        peak = prices[i] // next local highest price
        result += peak - valley
    }
    return result
}
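c. Dynamic programming solution. dp[i][0] is the best profit on day i while holding a stock and dp[i][1] while holding none, as the comments in the code below spell out.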
func maxProfit(prices []int) int {
    length := len(prices)
    dp := make([][2]int, length)
    // dp[i][0]: on day i we are holding stock
    // dp[i][1]: on day i we don't have stock
    dp[0] = [2]int{-prices[0], 0}
    max := func(a, b int) int {
        if a > b {
            return a
        }
        return b
    }
    for i := 1; i < length; i++ {
        // keep the stock we got before day i, or buy stock on day i
        dp[i][0] = max(dp[i - 1][0], dp[i - 1][1] - prices[i])
        // keep having no stock, or sell the stock we were holding on day i
        dp[i][1] = max(dp[i - 1][1], dp[i - 1][0] + prices[i])
    }
    return max(dp[length - 1][0], dp[length - 1][1])
}
Backtracking can also be used to solve chessboard problems.
A classical backtracking problem. The backtracking state transition function is backtracking(i, j) = backtracking(i + 1, j) + backtracking(i - 1, j) + backtracking(i, j - 1) + backtracking(i, j + 1), and we also need to keep tracking the global state of the grid.
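The grid problem behind this transition function is not shown in this export; as an illustration only, a sketch in the style of 695. Max Area of Island (an assumed example) applies the same four-direction recursion while tracking the grid's global state:
func maxAreaOfIsland(grid [][]int) int {
    rows, cols := len(grid), len(grid[0])
    var backtracking func(int, int) int
    backtracking = func(i, j int) int {
        if i < 0 || i >= rows || j < 0 || j >= cols || grid[i][j] == 0 {
            return 0
        }
        grid[i][j] = 0 // mark the cell as visited in the global grid state
        return 1 + backtracking(i+1, j) + backtracking(i-1, j) + backtracking(i, j-1) + backtracking(i, j+1)
    }
    result := 0
    for i := 0; i < rows; i++ {
        for j := 0; j < cols; j++ {
            if area := backtracking(i, j); area > result {
                result = area
            }
        }
    }
    return result
}
The two snippets that follow belong to a different problem, 698. Partition to K Equal Sum Subsets.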
func canPartitionKSubsets(nums []int, k int) bool {
    sum := 0
    for _, num := range nums {
        sum += num
    }
    if sum % k != 0 {
        return false
    }
    sort.Slice(nums, func(a, b int) bool {
        return nums[a] > nums[b] // sort by value desc
    })
    target := sum / k
    n := len(nums)
    if nums[n - 1] > target {
        return false
    }
    for n > 0 && nums[n - 1] == target {
        n--
        k--
    }
    subsets := make([]int, k)
    var backtracking func(int) bool
    backtracking = func(index int) bool {
        if index == n {
            for _, subset := range subsets {
                if subset != target {
                    return false
                }
            }
            return true
        }
        for i := 0; i < k; i++ {
            if subsets[i] + nums[index] <= target {
                subsets[i] += nums[index]
                if backtracking(index + 1) {
                    return true
                }
                subsets[i] -= nums[index]
            }
        }
        return false
    }
    return backtracking(0)
}
Another, faster backtracking solution is to build up the successful partitions one at a time.
func canPartitionKSubsets(nums []int, k int) bool {
    sum := 0
    for _, num := range nums {
        sum += num
    }
    if sum % k != 0 {
        return false
    }
    target := sum / k
    n := len(nums)
    sort.Slice(nums, func(a, b int) bool {
        // sort the values desc in a greedy way, so we can quickly reach the target number
        return nums[a] > nums[b]
    })
    if nums[n - 1] > target {
        return false
    }
    for n > 0 && nums[n - 1] == target {
        n--
        k--
    }
    visited := make([]bool, n)
    var backtracking func(int, int, int) bool
    backtracking = func(index, partition, acc int) bool {
        if partition == k {
            return true
        }
        if acc == target { // one partition is filled, start the next one from scratch
            return backtracking(0, partition + 1, 0)
        }
        for i := index; i < n; i++ {
            if !visited[i] {
                visited[i] = true
                if backtracking(i + 1, partition, acc + nums[i]) {
                    return true
                }
                visited[i] = false
            }
        }
        return false
    }
    return backtracking(0, 0, 0)
}
Backtracking can also help us to get all subsets of a given slice. If Combination and Partitioning problems can be converted into getting root-to-leaf paths during a tree DFS traversal, Subsets can be treated as getting all root-to-node paths during a tree DFS traversal.
It’s similar to #78; the only difference is that we can’t have duplicated subsets, which means we can’t pick the same value at the same tree level during traversal.
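The code for that variant is not shown above; a minimal sketch along the lines described, assuming the usual 90. Subsets II signature, could be: sort first, then skip a value when it repeats at the same tree level.
func subsetsWithDup(nums []int) [][]int {
    sort.Ints(nums) // sorting makes equal values adjacent so the level check works
    result := [][]int{}
    path := []int{}
    var backtracking func(int)
    backtracking = func(index int) {
        temp := make([]int, len(path))
        copy(temp, path)
        result = append(result, temp) // every node of the tree is a valid subset
        for i := index; i < len(nums); i++ {
            if i > index && nums[i] == nums[i-1] {
                continue // the same value at the same level would produce a duplicated subset
            }
            path = append(path, nums[i])
            backtracking(i + 1)
            path = path[:len(path)-1]
        }
    }
    backtracking(0)
    return result
}
The snippet that follows is for 491. Increasing Subsequences, which uses a per-level used map for the same purpose.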
func findSubsequences(nums []int) [][]int {
    result := [][]int{}
    path := []int{}
    length := len(nums)
    var backtracking func(int)
    backtracking = func(index int) {
        if len(path) == length {
            return
        }
        used := make(map[int]bool) // values already picked at this tree level
        for i := index; i < length; i++ {
            if (len(path) > 0 && path[len(path) - 1] > nums[i]) || used[nums[i]] {
                continue
            }
            used[nums[i]] = true
            path = append(path, nums[i])
            if len(path) >= 2 {
                temp := make([]int, len(path))
                copy(temp, path)
                result = append(result, temp)
            }
            backtracking(i + 1)
            path = path[:len(path) - 1]
        }
    }
    backtracking(0)
    return result
}
Partitioning is another classical problem which can be solved with the backtracking algorithm.
func restoreIpAddresses(s string) []string {
    result := []string{}
    path := []string{}
    length := len(s)
    var backtracking func(int)
    backtracking = func(index int) {
        if index > length {
            return
        } else if len(path) == 4 {
            if index == length {
                result = append(result, strings.Join(path, "."))
            }
            return
        }
        for i := index; i < length; i++ {
            if i - index <= 2 {
                num, _ := strconv.Atoi(s[index : i + 1])
                if (i - index == 2 && num < 100) || (i - index == 1 && num < 10) {
                    continue // a segment with a leading zero is invalid
                }
                if num < 256 {
                    path = append(path, s[index : i + 1])
                    backtracking(i + 1)
                    path = path[:len(path) - 1]
                }
            }
        }
    }
    backtracking(0)
    return result
}
Backtracking is an algorithmic technique for solving problems recursively by trying to build a solution incrementally, one piece at a time, and removing those solutions that fail to satisfy the constraints of the problem at any point of time (here, time refers to the time elapsed till reaching any level of the search tree). Usually we can consider backtracking as a recursive DFS traversal.
Backtracking template
func backtracking(...args) {
    if stop_condition {
        // Update the result set
        return
    }
    for i := range nodes_in_current_layer(...args) {
        // Down to the next layer
        backtracking(...args, i + 1)
        // Go back to the upper layer
    }
}
func combine(n, k int) [][]int {
    result := [][]int{}
    path := []int{}
    var backtracking func(int, int, int)
    backtracking = func(n, k, index int) {
        if len(path) == k {
            temp := make([]int, len(path))
            copy(temp, path)
            result = append(result, temp)
            return
        }
        // For example, given n = 4 and k = 3, if path is empty, n - (k - 0) + 1 = 2 means
        // the last valid start index is 2 (pruning)
        for i := index; i <= n - (k - len(path)) + 1; i++ {
            path = append(path, i)
            backtracking(n, k, i + 1)
            path = path[:len(path) - 1]
        }
    }
    backtracking(n, k, 1)
    return result
}
Since we can convert a combination backtracking problem into a DFS traversal problem, if we don’t want duplicated combinations in the result, it means we can’t pick duplicated nodes from the same layer of the tree. According to the backtracking template, inside the backtracking for-loop we are handling the same-layer logic (push/pop). At this point, if the given candidates slice is sorted, we just need to check whether the previous element equals the current element in the same layer, as the sketch below shows.
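A minimal sketch of this same-layer de-duplication, in the style of 40. Combination Sum II (an assumed example, not shown in the original):
func combinationSum2(candidates []int, target int) [][]int {
    sort.Ints(candidates) // the dedup check below requires a sorted slice
    result := [][]int{}
    path := []int{}
    var backtracking func(int, int)
    backtracking = func(index, remain int) {
        if remain == 0 {
            temp := make([]int, len(path))
            copy(temp, path)
            result = append(result, temp)
            return
        }
        for i := index; i < len(candidates) && candidates[i] <= remain; i++ {
            if i > index && candidates[i] == candidates[i-1] {
                continue // skip a duplicated value at the same tree layer
            }
            path = append(path, candidates[i])
            backtracking(i+1, remain-candidates[i])
            path = path[:len(path)-1]
        }
    }
    backtracking(0, target)
    return result
}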