how to separate different parts of speech in WTM?



Martinzh
08-31-2012, 09:16 PM
The title isn't quite the right question, but I couldn't find a better one. I'll describe the problem in detail here.

In a Hebrew class, the professor usually requires students to memorize the vocabulary that occurs more than 70 times (in Basic Hebrew, for example). So after taking the course, I'm supposed to know all the words that occur more than 70 times.

My goal is to mark all the words occurring 69 times or fewer in BW, so that when I see one, I know I haven't memorized it yet.

First I used the "Word List Manager" to make a list of the words occurring fewer than 70x. Then I used the "GSE" to search for them in the WTM, and changed the color of all the words that were found.

Everything is perfect except the word list in the WTM. For example, take the word פּרץ: in Van Pelt and Pratico's book 'The Vocabulary Guide to Biblical Hebrew,' the verb form occurs 46 times and the noun form occurs 19 times. Therefore, neither the verb nor the noun needs to be memorized.
However, in the WTM the root פּרץ occurs 84 times. If I mark the words occurring fewer than 70x, this word won't be marked. The problem is that when I see it, I assume I must have memorized it, when in fact I haven't.

The problem with the WTM is that it groups all forms derived from the same root, regardless of whether they are different parts of speech or even homonyms (I guess).
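
To illustrate what I mean, here is a rough sketch in Python (this is not anything from BibleWorks, and the homonym split in the toy data is only my guess at how the 84 hits might break down):

from collections import Counter

# hypothetical tagged occurrences of one root: (root, part of speech, homonym)
occurrences = ([("prts", "verb", 1)] * 50
               + [("prts", "noun", 1)] * 19
               + [("prts", "noun", 2)] * 15)

by_root = Counter(o[0] for o in occurrences)
by_entry = Counter(occurrences)

print(by_root)   # Counter({'prts': 84}) -> looks frequent, so it never gets marked
print(by_entry)  # each (root, pos, homonym) entry is under 70 -> each should be marked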

I don't know whether this problem can be solved. Is there a way?

bobvenem
09-02-2012, 09:44 AM
Going deeper into command line formatting for WTM, you will find that your example word can be searched as either a verb or a noun, and that these can be further narrowed by homonym. In the case of your example, searching for the noun form of Homonym 1 returns 19 hits (as per Van Pelt and Pratico); the verb form regardless of homonym returns 50 hits (unlike the lexicon).
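
For example, on the command line with the WTM active (this is from memory, so treat it as a sketch; check the Help files for the exact codes, especially the homonym notation):

.פרץ@v*   (all verb forms of the lemma)
.פרץ@n*   (all noun forms of the lemma)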

All of this is in the Help files.

Martinzh
09-03-2012, 07:43 PM

Thank you very much, bobvenem. I know I can do that for a specific word. But in the situation I described above, I'm not dealing with a specific word: I want to search for all the words that occur fewer than 70 times in Van Pelt and Pratico's Vocabulary book, or all the words that occur 47 to 69 times, and then change them to another font to differentiate them from the rest. How can I achieve this? (I guess this is a better description of my problem.)

Thank you.

bobvenem
09-04-2012, 09:39 AM
Sorry about that. However, you should be able to do the same search by part of speech using the wildcard "*" instead of a lemma. Then, by exporting the results to the WLM, you can narrow each part-of-speech list to words with 69 hits or fewer (you would be making a separate word list for each part of speech, and then combining them once you have trimmed them down to the appropriate limits).

The only bit of work then would be to eliminate any words which are not in Van Pelt. If you have an electronic list of the words in Van Pelt, you could start off by creating a word list in the GSE; just insert the Van Pelt word list as an inclusion list before searching for parts of speech. That way your word lists will have only the Van Pelt words.
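
In plain terms, the list logic is just a filter and an intersection. A Python sketch of the idea (the lemma spellings and counts below are made up, and this is not BibleWorks code):

def words_to_mark(pos_counts, van_pelt, threshold=70):
    # pos_counts maps (lemma, part of speech) to hit counts;
    # van_pelt is the set of lemmas in the vocabulary guide.
    rare = {lemma for (lemma, pos), hits in pos_counts.items() if hits < threshold}
    return rare & van_pelt

counts = {("dbr", "verb"): 1100, ("prts", "verb"): 50, ("prts", "noun"): 19}
print(words_to_mark(counts, {"dbr", "prts"}))  # {'prts'}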

Hope this makes sense. Also, someone else will surely have a more efficient way to do this.

Martinzh
09-04-2012, 07:49 PM

Thank you. The former suggestion seems a little too complicated for me; I'm a beginner with BW.

As for the latter, can you explain a little more about it?
I don't have an electronic list of the exact words in Van Pelt, but I do have the word list of the vocabulary in his book Basics of Biblical Hebrew, which contains most of the words in his Vocabulary book that occur more than 70 times. Those words are in their lexical forms (the following picture is an example). Can you tell me more about how to do this with that list?
Thank you very much!
[attached screenshot: a sample of the word list]

Martinzh
09-06-2012, 10:16 PM

It is strange. I replied days ago, but my reply never showed up.

Thank you very much for your kind reply and patience.
I'm a beginner, so your first suggestion seems a little too complex for me.
As for the second one, I don't have an electronic list of the words in Van Pelt, but I have a list of the words in his book Basics of Biblical Hebrew, which suits me better, because those are the words I have memorized. The words are in Microsoft Word format, with the vowels.
Can you tell me in more detail how to work with this list?
Thank you very much!